00:00:00.000 Started by upstream project "autotest-nightly" build number 4281 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3644 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.168 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-cvl-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.169 The recommended git tool is: git 00:00:00.169 using credential 00000000-0000-0000-0000-000000000002 00:00:00.171 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-cvl-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.222 Fetching changes from the remote Git repository 00:00:00.224 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.265 Using shallow fetch with depth 1 00:00:00.265 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.265 > git --version # timeout=10 00:00:00.294 > git --version # 'git version 2.39.2' 00:00:00.294 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.315 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.315 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.567 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.578 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.590 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.590 > git config core.sparsecheckout # timeout=10 00:00:07.602 > git read-tree -mu HEAD # timeout=10 00:00:07.617 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.634 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.634 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.717 [Pipeline] Start of Pipeline 00:00:07.728 [Pipeline] library 00:00:07.729 Loading library shm_lib@master 00:00:07.729 Library shm_lib@master is cached. Copying from home. 00:00:07.742 [Pipeline] node 00:00:07.763 Running on WFP3 in /var/jenkins/workspace/nvmf-cvl-phy-autotest 00:00:07.765 [Pipeline] { 00:00:07.774 [Pipeline] catchError 00:00:07.775 [Pipeline] { 00:00:07.785 [Pipeline] wrap 00:00:07.792 [Pipeline] { 00:00:07.798 [Pipeline] stage 00:00:07.799 [Pipeline] { (Prologue) 00:00:07.985 [Pipeline] sh 00:00:08.875 + logger -p user.info -t JENKINS-CI 00:00:08.907 [Pipeline] echo 00:00:08.908 Node: WFP3 00:00:08.915 [Pipeline] sh 00:00:09.254 [Pipeline] setCustomBuildProperty 00:00:09.266 [Pipeline] echo 00:00:09.267 Cleanup processes 00:00:09.272 [Pipeline] sh 00:00:09.565 + sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:00:09.565 60795 sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:00:09.580 [Pipeline] sh 00:00:09.874 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:00:09.874 ++ grep -v 'sudo pgrep' 00:00:09.874 ++ awk '{print $1}' 00:00:09.874 + sudo kill -9 00:00:09.874 + true 00:00:09.891 [Pipeline] cleanWs 00:00:09.904 [WS-CLEANUP] Deleting project workspace... 00:00:09.904 [WS-CLEANUP] Deferred wipeout is used... 
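The "Cleanup processes" step traced above reduces to a pgrep/kill pipeline over the previous build's spdk tree. As a standalone sketch (the workspace path is taken from this job's log; the actual step is generated from the jbp job config and may differ in detail):

    # Kill anything still running out of the old workspace before it is wiped.
    WORKSPACE=/var/jenkins/workspace/nvmf-cvl-phy-autotest
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # With no leftover processes the PID list is empty and kill -9 fails;
    # '|| true' absorbs that, matching the '+ true' seen in the trace.
    sudo kill -9 $pids || true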
00:00:09.916 [WS-CLEANUP] done 00:00:09.921 [Pipeline] setCustomBuildProperty 00:00:09.937 [Pipeline] sh 00:00:10.227 + sudo git config --global --replace-all safe.directory '*' 00:00:10.334 [Pipeline] httpRequest 00:00:12.297 [Pipeline] echo 00:00:12.299 Sorcerer 10.211.164.20 is alive 00:00:12.308 [Pipeline] retry 00:00:12.310 [Pipeline] { 00:00:12.322 [Pipeline] httpRequest 00:00:12.327 HttpMethod: GET 00:00:12.327 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.328 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.349 Response Code: HTTP/1.1 200 OK 00:00:12.349 Success: Status code 200 is in the accepted range: 200,404 00:00:12.350 Saving response body to /var/jenkins/workspace/nvmf-cvl-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:37.587 [Pipeline] } 00:00:37.604 [Pipeline] // retry 00:00:37.611 [Pipeline] sh 00:00:37.901 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:37.918 [Pipeline] httpRequest 00:00:38.456 [Pipeline] echo 00:00:38.458 Sorcerer 10.211.164.20 is alive 00:00:38.467 [Pipeline] retry 00:00:38.469 [Pipeline] { 00:00:38.482 [Pipeline] httpRequest 00:00:38.487 HttpMethod: GET 00:00:38.487 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:38.488 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:38.508 Response Code: HTTP/1.1 200 OK 00:00:38.508 Success: Status code 200 is in the accepted range: 200,404 00:00:38.508 Saving response body to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:18.608 [Pipeline] } 00:01:18.624 [Pipeline] // retry 00:01:18.629 [Pipeline] sh 00:01:18.914 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:21.466 [Pipeline] sh 00:01:21.752 + git -C spdk log --oneline -n5 00:01:21.752 d47eb51c9 bdev: fix a race between reset start and complete 00:01:21.752 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:21.752 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:21.752 4bcab9fb9 correct kick for CQ full case 00:01:21.752 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:21.763 [Pipeline] } 00:01:21.777 [Pipeline] // stage 00:01:21.787 [Pipeline] stage 00:01:21.789 [Pipeline] { (Prepare) 00:01:21.806 [Pipeline] writeFile 00:01:21.823 [Pipeline] sh 00:01:22.109 + logger -p user.info -t JENKINS-CI 00:01:22.123 [Pipeline] sh 00:01:22.410 + logger -p user.info -t JENKINS-CI 00:01:22.423 [Pipeline] sh 00:01:22.709 + cat autorun-spdk.conf 00:01:22.709 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.709 SPDK_TEST_NVMF=1 00:01:22.709 SPDK_TEST_NVME_CLI=1 00:01:22.709 SPDK_TEST_NVMF_TRANSPORT=rdma 00:01:22.709 SPDK_TEST_NVMF_NICS=e810 00:01:22.709 SPDK_RUN_ASAN=1 00:01:22.709 SPDK_RUN_UBSAN=1 00:01:22.709 NET_TYPE=phy 00:01:22.717 RUN_NIGHTLY=1 00:01:22.721 [Pipeline] readFile 00:01:22.765 [Pipeline] withEnv 00:01:22.766 [Pipeline] { 00:01:22.779 [Pipeline] sh 00:01:23.068 + set -ex 00:01:23.068 + [[ -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf ]] 00:01:23.068 + source /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf 00:01:23.068 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.068 ++ SPDK_TEST_NVMF=1 00:01:23.068 ++ SPDK_TEST_NVME_CLI=1 00:01:23.068 ++ 
SPDK_TEST_NVMF_TRANSPORT=rdma 00:01:23.068 ++ SPDK_TEST_NVMF_NICS=e810 00:01:23.068 ++ SPDK_RUN_ASAN=1 00:01:23.068 ++ SPDK_RUN_UBSAN=1 00:01:23.068 ++ NET_TYPE=phy 00:01:23.068 ++ RUN_NIGHTLY=1 00:01:23.068 + case $SPDK_TEST_NVMF_NICS in 00:01:23.068 + DRIVERS=ice 00:01:23.068 + [[ rdma == \r\d\m\a ]] 00:01:23.068 + DRIVERS+=' irdma' 00:01:23.068 + [[ -n ice irdma ]] 00:01:23.068 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:23.068 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:23.068 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:23.068 rmmod: ERROR: Module i40iw is not currently loaded 00:01:23.068 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:23.068 + true 00:01:23.068 + for D in $DRIVERS 00:01:23.068 + sudo modprobe ice 00:01:23.068 + for D in $DRIVERS 00:01:23.068 + sudo modprobe irdma 00:01:23.328 + exit 0 00:01:23.337 [Pipeline] } 00:01:23.355 [Pipeline] // withEnv 00:01:23.360 [Pipeline] } 00:01:23.375 [Pipeline] // stage 00:01:23.385 [Pipeline] catchError 00:01:23.386 [Pipeline] { 00:01:23.400 [Pipeline] timeout 00:01:23.400 Timeout set to expire in 1 hr 0 min 00:01:23.402 [Pipeline] { 00:01:23.417 [Pipeline] stage 00:01:23.420 [Pipeline] { (Tests) 00:01:23.433 [Pipeline] sh 00:01:23.721 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest 00:01:23.721 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest 00:01:23.721 + DIR_ROOT=/var/jenkins/workspace/nvmf-cvl-phy-autotest 00:01:23.721 + [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest ]] 00:01:23.721 + DIR_SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:01:23.721 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/output 00:01:23.721 + [[ -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk ]] 00:01:23.721 + [[ ! 
-d /var/jenkins/workspace/nvmf-cvl-phy-autotest/output ]] 00:01:23.721 + mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/output 00:01:23.721 + [[ -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/output ]] 00:01:23.721 + [[ nvmf-cvl-phy-autotest == pkgdep-* ]] 00:01:23.721 + cd /var/jenkins/workspace/nvmf-cvl-phy-autotest 00:01:23.721 + source /etc/os-release 00:01:23.721 ++ NAME='Fedora Linux' 00:01:23.721 ++ VERSION='39 (Cloud Edition)' 00:01:23.721 ++ ID=fedora 00:01:23.721 ++ VERSION_ID=39 00:01:23.721 ++ VERSION_CODENAME= 00:01:23.721 ++ PLATFORM_ID=platform:f39 00:01:23.721 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:23.721 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:23.721 ++ LOGO=fedora-logo-icon 00:01:23.721 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:23.721 ++ HOME_URL=https://fedoraproject.org/ 00:01:23.721 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:23.721 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:23.721 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:23.721 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:23.721 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:23.721 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:23.721 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:23.721 ++ SUPPORT_END=2024-11-12 00:01:23.721 ++ VARIANT='Cloud Edition' 00:01:23.721 ++ VARIANT_ID=cloud 00:01:23.721 + uname -a 00:01:23.721 Linux spdk-wfp-03 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:01:23.721 + sudo /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status 00:01:26.261 Hugepages 00:01:26.261 node hugesize free / total 00:01:26.261 node0 1048576kB 0 / 0 00:01:26.261 node0 2048kB 0 / 0 00:01:26.261 node1 1048576kB 0 / 0 00:01:26.261 node1 2048kB 0 / 0 00:01:26.261 00:01:26.261 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:26.261 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:26.261 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:26.261 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:26.261 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:26.261 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:26.261 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:26.261 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:26.261 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:26.261 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:26.261 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:01:26.261 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:26.261 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:26.261 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:26.261 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:26.261 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:26.261 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:26.261 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:26.261 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:26.261 + rm -f /tmp/spdk-ld-path 00:01:26.261 + source autorun-spdk.conf 00:01:26.261 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.261 ++ SPDK_TEST_NVMF=1 00:01:26.261 ++ SPDK_TEST_NVME_CLI=1 00:01:26.261 ++ SPDK_TEST_NVMF_TRANSPORT=rdma 00:01:26.261 ++ SPDK_TEST_NVMF_NICS=e810 00:01:26.261 ++ SPDK_RUN_ASAN=1 00:01:26.261 ++ SPDK_RUN_UBSAN=1 00:01:26.261 ++ NET_TYPE=phy 00:01:26.261 ++ RUN_NIGHTLY=1 00:01:26.261 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:26.261 + [[ -n '' ]] 00:01:26.261 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:01:26.521 + for M in 
/var/spdk/build-*-manifest.txt 00:01:26.521 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:26.521 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/ 00:01:26.521 + for M in /var/spdk/build-*-manifest.txt 00:01:26.521 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:26.521 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/ 00:01:26.521 + for M in /var/spdk/build-*-manifest.txt 00:01:26.521 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:26.521 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/ 00:01:26.521 ++ uname 00:01:26.521 + [[ Linux == \L\i\n\u\x ]] 00:01:26.521 + sudo dmesg -T 00:01:26.521 + sudo dmesg --clear 00:01:26.521 + dmesg_pid=62348 00:01:26.521 + [[ Fedora Linux == FreeBSD ]] 00:01:26.521 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.521 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.521 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:26.521 + sudo dmesg -Tw 00:01:26.521 + [[ -x /usr/src/fio-static/fio ]] 00:01:26.521 + export FIO_BIN=/usr/src/fio-static/fio 00:01:26.521 + FIO_BIN=/usr/src/fio-static/fio 00:01:26.521 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\c\v\l\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:26.521 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:26.521 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:26.521 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.521 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.521 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:26.521 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.521 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.521 + spdk/autorun.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf 00:01:26.521 00:45:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:26.522 00:45:33 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf 00:01:26.522 00:45:33 -- nvmf-cvl-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.522 00:45:33 -- nvmf-cvl-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:26.522 00:45:33 -- nvmf-cvl-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:26.522 00:45:33 -- nvmf-cvl-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=rdma 00:01:26.522 00:45:33 -- nvmf-cvl-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:26.522 00:45:33 -- nvmf-cvl-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:01:26.522 00:45:33 -- nvmf-cvl-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:26.522 00:45:33 -- nvmf-cvl-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:26.522 00:45:33 -- nvmf-cvl-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:01:26.522 00:45:33 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:26.522 00:45:33 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf 00:01:26.781 00:45:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:26.781 00:45:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:01:26.781 00:45:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:26.781 00:45:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 
00:01:26.782 00:45:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:26.782 00:45:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:26.782 00:45:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.782 00:45:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.782 00:45:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.782 00:45:33 -- paths/export.sh@5 -- $ export PATH 00:01:26.782 00:45:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.782 00:45:33 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output 00:01:26.782 00:45:33 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:26.782 00:45:33 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731973533.XXXXXX 00:01:26.782 00:45:33 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731973533.NmvK54 00:01:26.782 00:45:33 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:26.782 00:45:33 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:26.782 00:45:33 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/' 00:01:26.782 00:45:33 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:26.782 00:45:33 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:26.782 00:45:33 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:26.782 00:45:33 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:26.782 00:45:33 
-- common/autotest_common.sh@10 -- $ set +x 00:01:26.782 00:45:33 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:26.782 00:45:33 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:26.782 00:45:33 -- pm/common@17 -- $ local monitor 00:01:26.782 00:45:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.782 00:45:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.782 00:45:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.782 00:45:33 -- pm/common@21 -- $ date +%s 00:01:26.782 00:45:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.782 00:45:33 -- pm/common@21 -- $ date +%s 00:01:26.782 00:45:33 -- pm/common@25 -- $ sleep 1 00:01:26.782 00:45:33 -- pm/common@21 -- $ date +%s 00:01:26.782 00:45:33 -- pm/common@21 -- $ date +%s 00:01:26.782 00:45:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731973533 00:01:26.782 00:45:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731973533 00:01:26.782 00:45:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731973533 00:01:26.782 00:45:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731973533 00:01:26.782 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731973533_collect-cpu-load.pm.log 00:01:26.782 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731973533_collect-vmstat.pm.log 00:01:26.782 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731973533_collect-cpu-temp.pm.log 00:01:26.782 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731973533_collect-bmc-pm.bmc.pm.log 00:01:27.722 00:45:34 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:27.722 00:45:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:27.722 00:45:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:27.722 00:45:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:01:27.722 00:45:34 -- spdk/autobuild.sh@16 -- $ date -u 00:01:27.722 Mon Nov 18 11:45:34 PM UTC 2024 00:01:27.722 00:45:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:27.722 v25.01-pre-190-gd47eb51c9 00:01:27.722 00:45:34 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:27.722 00:45:34 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:27.722 00:45:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:27.722 00:45:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:27.722 00:45:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.722 ************************************ 00:01:27.722 
START TEST asan 00:01:27.722 ************************************ 00:01:27.722 00:45:34 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:27.722 using asan 00:01:27.722 00:01:27.722 real 0m0.000s 00:01:27.722 user 0m0.000s 00:01:27.722 sys 0m0.000s 00:01:27.722 00:45:34 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:27.722 00:45:34 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:27.722 ************************************ 00:01:27.722 END TEST asan 00:01:27.722 ************************************ 00:01:27.722 00:45:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:27.722 00:45:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:27.723 00:45:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:27.723 00:45:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:27.723 00:45:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.723 ************************************ 00:01:27.723 START TEST ubsan 00:01:27.723 ************************************ 00:01:27.723 00:45:34 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:27.723 using ubsan 00:01:27.723 00:01:27.723 real 0m0.000s 00:01:27.723 user 0m0.000s 00:01:27.723 sys 0m0.000s 00:01:27.723 00:45:34 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:27.723 00:45:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:27.723 ************************************ 00:01:27.723 END TEST ubsan 00:01:27.723 ************************************ 00:01:27.983 00:45:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:27.983 00:45:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:27.983 00:45:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:27.983 00:45:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:27.983 00:45:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:27.983 00:45:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:27.983 00:45:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:27.983 00:45:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:27.983 00:45:34 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:27.983 Using default SPDK env in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:01:27.983 Using default DPDK in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:01:29.364 Using 'verbs' RDMA provider 00:01:45.194 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:57.415 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:57.415 Creating mk/config.mk...done. 00:01:57.415 Creating mk/cc.flags.mk...done. 00:01:57.415 Type 'make' to build. 00:01:57.415 00:46:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:57.415 00:46:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:57.415 00:46:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:57.415 00:46:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.415 ************************************ 00:01:57.415 START TEST make 00:01:57.415 ************************************ 00:01:57.415 00:46:03 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:57.415 make[1]: Nothing to be done for 'all'. 
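For reference, the configure flags recorded in config_params above drive this asan/ubsan build; a rough manual equivalent on a development box (local checkout path and job count are placeholders, not the CI autobuild wrapper, and fio sources under /usr/src/fio plus a verbs-capable rdma-core install are assumed, as detected in this log) would be:

    cd /path/to/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
    make -j"$(nproc)"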
00:02:05.553 The Meson build system 00:02:05.553 Version: 1.5.0 00:02:05.553 Source dir: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk 00:02:05.553 Build dir: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp 00:02:05.553 Build type: native build 00:02:05.553 Program cat found: YES (/usr/bin/cat) 00:02:05.553 Project name: DPDK 00:02:05.553 Project version: 24.03.0 00:02:05.553 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:05.553 C linker for the host machine: cc ld.bfd 2.40-14 00:02:05.553 Host machine cpu family: x86_64 00:02:05.553 Host machine cpu: x86_64 00:02:05.553 Message: ## Building in Developer Mode ## 00:02:05.553 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.553 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.553 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.553 Program python3 found: YES (/usr/bin/python3) 00:02:05.553 Program cat found: YES (/usr/bin/cat) 00:02:05.553 Compiler for C supports arguments -march=native: YES 00:02:05.553 Checking for size of "void *" : 8 00:02:05.553 Checking for size of "void *" : 8 (cached) 00:02:05.553 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:05.553 Library m found: YES 00:02:05.553 Library numa found: YES 00:02:05.553 Has header "numaif.h" : YES 00:02:05.553 Library fdt found: NO 00:02:05.553 Library execinfo found: NO 00:02:05.553 Has header "execinfo.h" : YES 00:02:05.553 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:05.553 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.553 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.553 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.553 Run-time dependency openssl found: YES 3.1.1 00:02:05.553 Run-time dependency libpcap found: YES 1.10.4 00:02:05.553 Has header "pcap.h" with dependency libpcap: YES 00:02:05.553 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.553 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.553 Compiler for C supports arguments -Wformat: YES 00:02:05.553 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.553 Compiler for C supports arguments -Wformat-security: NO 00:02:05.553 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.553 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.553 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.553 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.553 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.553 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.553 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.553 Compiler for C supports arguments -Wundef: YES 00:02:05.553 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.553 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.553 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.553 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.553 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.553 Program objdump found: YES (/usr/bin/objdump) 00:02:05.553 Compiler for C supports arguments -mavx512f: YES 00:02:05.553 Checking if "AVX512 checking" compiles: YES 
00:02:05.553 Fetching value of define "__SSE4_2__" : 1 00:02:05.553 Fetching value of define "__AES__" : 1 00:02:05.553 Fetching value of define "__AVX__" : 1 00:02:05.553 Fetching value of define "__AVX2__" : 1 00:02:05.553 Fetching value of define "__AVX512BW__" : 1 00:02:05.553 Fetching value of define "__AVX512CD__" : 1 00:02:05.553 Fetching value of define "__AVX512DQ__" : 1 00:02:05.553 Fetching value of define "__AVX512F__" : 1 00:02:05.553 Fetching value of define "__AVX512VL__" : 1 00:02:05.553 Fetching value of define "__PCLMUL__" : 1 00:02:05.553 Fetching value of define "__RDRND__" : 1 00:02:05.553 Fetching value of define "__RDSEED__" : 1 00:02:05.553 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:05.553 Fetching value of define "__znver1__" : (undefined) 00:02:05.553 Fetching value of define "__znver2__" : (undefined) 00:02:05.553 Fetching value of define "__znver3__" : (undefined) 00:02:05.553 Fetching value of define "__znver4__" : (undefined) 00:02:05.553 Library asan found: YES 00:02:05.553 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.553 Message: lib/log: Defining dependency "log" 00:02:05.553 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.553 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.553 Library rt found: YES 00:02:05.553 Checking for function "getentropy" : NO 00:02:05.553 Message: lib/eal: Defining dependency "eal" 00:02:05.553 Message: lib/ring: Defining dependency "ring" 00:02:05.553 Message: lib/rcu: Defining dependency "rcu" 00:02:05.553 Message: lib/mempool: Defining dependency "mempool" 00:02:05.553 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.553 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.553 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.553 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.553 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.553 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:05.553 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:05.553 Compiler for C supports arguments -mpclmul: YES 00:02:05.553 Compiler for C supports arguments -maes: YES 00:02:05.553 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.553 Compiler for C supports arguments -mavx512bw: YES 00:02:05.553 Compiler for C supports arguments -mavx512dq: YES 00:02:05.553 Compiler for C supports arguments -mavx512vl: YES 00:02:05.553 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:05.553 Compiler for C supports arguments -mavx2: YES 00:02:05.553 Compiler for C supports arguments -mavx: YES 00:02:05.553 Message: lib/net: Defining dependency "net" 00:02:05.553 Message: lib/meter: Defining dependency "meter" 00:02:05.553 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.553 Message: lib/pci: Defining dependency "pci" 00:02:05.553 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.553 Message: lib/hash: Defining dependency "hash" 00:02:05.553 Message: lib/timer: Defining dependency "timer" 00:02:05.553 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.553 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.553 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.553 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.553 Message: lib/power: Defining dependency "power" 00:02:05.554 Message: lib/reorder: Defining dependency "reorder" 00:02:05.554 Message: lib/security: Defining dependency "security" 00:02:05.554 Has header 
"linux/userfaultfd.h" : YES 00:02:05.554 Has header "linux/vduse.h" : YES 00:02:05.554 Message: lib/vhost: Defining dependency "vhost" 00:02:05.554 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.554 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.554 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.554 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.554 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.554 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.554 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.554 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.554 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.554 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.554 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.554 Configuring doxy-api-html.conf using configuration 00:02:05.554 Configuring doxy-api-man.conf using configuration 00:02:05.554 Program mandb found: YES (/usr/bin/mandb) 00:02:05.554 Program sphinx-build found: NO 00:02:05.554 Configuring rte_build_config.h using configuration 00:02:05.554 Message: 00:02:05.554 ================= 00:02:05.554 Applications Enabled 00:02:05.554 ================= 00:02:05.554 00:02:05.554 apps: 00:02:05.554 00:02:05.554 00:02:05.554 Message: 00:02:05.554 ================= 00:02:05.554 Libraries Enabled 00:02:05.554 ================= 00:02:05.554 00:02:05.554 libs: 00:02:05.554 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.554 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.554 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.554 00:02:05.554 Message: 00:02:05.554 =============== 00:02:05.554 Drivers Enabled 00:02:05.554 =============== 00:02:05.554 00:02:05.554 common: 00:02:05.554 00:02:05.554 bus: 00:02:05.554 pci, vdev, 00:02:05.554 mempool: 00:02:05.554 ring, 00:02:05.554 dma: 00:02:05.554 00:02:05.554 net: 00:02:05.554 00:02:05.554 crypto: 00:02:05.554 00:02:05.554 compress: 00:02:05.554 00:02:05.554 vdpa: 00:02:05.554 00:02:05.554 00:02:05.554 Message: 00:02:05.554 ================= 00:02:05.554 Content Skipped 00:02:05.554 ================= 00:02:05.554 00:02:05.554 apps: 00:02:05.554 dumpcap: explicitly disabled via build config 00:02:05.554 graph: explicitly disabled via build config 00:02:05.554 pdump: explicitly disabled via build config 00:02:05.554 proc-info: explicitly disabled via build config 00:02:05.554 test-acl: explicitly disabled via build config 00:02:05.554 test-bbdev: explicitly disabled via build config 00:02:05.554 test-cmdline: explicitly disabled via build config 00:02:05.554 test-compress-perf: explicitly disabled via build config 00:02:05.554 test-crypto-perf: explicitly disabled via build config 00:02:05.554 test-dma-perf: explicitly disabled via build config 00:02:05.554 test-eventdev: explicitly disabled via build config 00:02:05.554 test-fib: explicitly disabled via build config 00:02:05.554 test-flow-perf: explicitly disabled via build config 00:02:05.554 test-gpudev: explicitly disabled via build config 00:02:05.554 test-mldev: explicitly disabled via build config 00:02:05.554 test-pipeline: explicitly disabled via build config 00:02:05.554 test-pmd: explicitly disabled via build config 00:02:05.554 test-regex: explicitly disabled via build config 00:02:05.554 
test-sad: explicitly disabled via build config 00:02:05.554 test-security-perf: explicitly disabled via build config 00:02:05.554 00:02:05.554 libs: 00:02:05.554 argparse: explicitly disabled via build config 00:02:05.554 metrics: explicitly disabled via build config 00:02:05.554 acl: explicitly disabled via build config 00:02:05.554 bbdev: explicitly disabled via build config 00:02:05.554 bitratestats: explicitly disabled via build config 00:02:05.554 bpf: explicitly disabled via build config 00:02:05.554 cfgfile: explicitly disabled via build config 00:02:05.554 distributor: explicitly disabled via build config 00:02:05.554 efd: explicitly disabled via build config 00:02:05.554 eventdev: explicitly disabled via build config 00:02:05.554 dispatcher: explicitly disabled via build config 00:02:05.554 gpudev: explicitly disabled via build config 00:02:05.554 gro: explicitly disabled via build config 00:02:05.554 gso: explicitly disabled via build config 00:02:05.554 ip_frag: explicitly disabled via build config 00:02:05.554 jobstats: explicitly disabled via build config 00:02:05.554 latencystats: explicitly disabled via build config 00:02:05.554 lpm: explicitly disabled via build config 00:02:05.554 member: explicitly disabled via build config 00:02:05.554 pcapng: explicitly disabled via build config 00:02:05.554 rawdev: explicitly disabled via build config 00:02:05.554 regexdev: explicitly disabled via build config 00:02:05.554 mldev: explicitly disabled via build config 00:02:05.554 rib: explicitly disabled via build config 00:02:05.554 sched: explicitly disabled via build config 00:02:05.554 stack: explicitly disabled via build config 00:02:05.554 ipsec: explicitly disabled via build config 00:02:05.554 pdcp: explicitly disabled via build config 00:02:05.554 fib: explicitly disabled via build config 00:02:05.554 port: explicitly disabled via build config 00:02:05.554 pdump: explicitly disabled via build config 00:02:05.554 table: explicitly disabled via build config 00:02:05.554 pipeline: explicitly disabled via build config 00:02:05.554 graph: explicitly disabled via build config 00:02:05.554 node: explicitly disabled via build config 00:02:05.554 00:02:05.554 drivers: 00:02:05.554 common/cpt: not in enabled drivers build config 00:02:05.554 common/dpaax: not in enabled drivers build config 00:02:05.554 common/iavf: not in enabled drivers build config 00:02:05.554 common/idpf: not in enabled drivers build config 00:02:05.554 common/ionic: not in enabled drivers build config 00:02:05.554 common/mvep: not in enabled drivers build config 00:02:05.554 common/octeontx: not in enabled drivers build config 00:02:05.554 bus/auxiliary: not in enabled drivers build config 00:02:05.554 bus/cdx: not in enabled drivers build config 00:02:05.554 bus/dpaa: not in enabled drivers build config 00:02:05.554 bus/fslmc: not in enabled drivers build config 00:02:05.554 bus/ifpga: not in enabled drivers build config 00:02:05.554 bus/platform: not in enabled drivers build config 00:02:05.554 bus/uacce: not in enabled drivers build config 00:02:05.554 bus/vmbus: not in enabled drivers build config 00:02:05.554 common/cnxk: not in enabled drivers build config 00:02:05.554 common/mlx5: not in enabled drivers build config 00:02:05.554 common/nfp: not in enabled drivers build config 00:02:05.554 common/nitrox: not in enabled drivers build config 00:02:05.554 common/qat: not in enabled drivers build config 00:02:05.554 common/sfc_efx: not in enabled drivers build config 00:02:05.554 mempool/bucket: not in enabled 
drivers build config 00:02:05.554 mempool/cnxk: not in enabled drivers build config 00:02:05.554 mempool/dpaa: not in enabled drivers build config 00:02:05.554 mempool/dpaa2: not in enabled drivers build config 00:02:05.554 mempool/octeontx: not in enabled drivers build config 00:02:05.554 mempool/stack: not in enabled drivers build config 00:02:05.554 dma/cnxk: not in enabled drivers build config 00:02:05.554 dma/dpaa: not in enabled drivers build config 00:02:05.554 dma/dpaa2: not in enabled drivers build config 00:02:05.554 dma/hisilicon: not in enabled drivers build config 00:02:05.554 dma/idxd: not in enabled drivers build config 00:02:05.554 dma/ioat: not in enabled drivers build config 00:02:05.554 dma/skeleton: not in enabled drivers build config 00:02:05.554 net/af_packet: not in enabled drivers build config 00:02:05.554 net/af_xdp: not in enabled drivers build config 00:02:05.554 net/ark: not in enabled drivers build config 00:02:05.554 net/atlantic: not in enabled drivers build config 00:02:05.554 net/avp: not in enabled drivers build config 00:02:05.554 net/axgbe: not in enabled drivers build config 00:02:05.554 net/bnx2x: not in enabled drivers build config 00:02:05.554 net/bnxt: not in enabled drivers build config 00:02:05.554 net/bonding: not in enabled drivers build config 00:02:05.554 net/cnxk: not in enabled drivers build config 00:02:05.554 net/cpfl: not in enabled drivers build config 00:02:05.554 net/cxgbe: not in enabled drivers build config 00:02:05.554 net/dpaa: not in enabled drivers build config 00:02:05.554 net/dpaa2: not in enabled drivers build config 00:02:05.554 net/e1000: not in enabled drivers build config 00:02:05.554 net/ena: not in enabled drivers build config 00:02:05.554 net/enetc: not in enabled drivers build config 00:02:05.554 net/enetfec: not in enabled drivers build config 00:02:05.554 net/enic: not in enabled drivers build config 00:02:05.554 net/failsafe: not in enabled drivers build config 00:02:05.554 net/fm10k: not in enabled drivers build config 00:02:05.554 net/gve: not in enabled drivers build config 00:02:05.554 net/hinic: not in enabled drivers build config 00:02:05.554 net/hns3: not in enabled drivers build config 00:02:05.554 net/i40e: not in enabled drivers build config 00:02:05.554 net/iavf: not in enabled drivers build config 00:02:05.554 net/ice: not in enabled drivers build config 00:02:05.554 net/idpf: not in enabled drivers build config 00:02:05.554 net/igc: not in enabled drivers build config 00:02:05.554 net/ionic: not in enabled drivers build config 00:02:05.554 net/ipn3ke: not in enabled drivers build config 00:02:05.554 net/ixgbe: not in enabled drivers build config 00:02:05.554 net/mana: not in enabled drivers build config 00:02:05.554 net/memif: not in enabled drivers build config 00:02:05.554 net/mlx4: not in enabled drivers build config 00:02:05.554 net/mlx5: not in enabled drivers build config 00:02:05.554 net/mvneta: not in enabled drivers build config 00:02:05.554 net/mvpp2: not in enabled drivers build config 00:02:05.555 net/netvsc: not in enabled drivers build config 00:02:05.555 net/nfb: not in enabled drivers build config 00:02:05.555 net/nfp: not in enabled drivers build config 00:02:05.555 net/ngbe: not in enabled drivers build config 00:02:05.555 net/null: not in enabled drivers build config 00:02:05.555 net/octeontx: not in enabled drivers build config 00:02:05.555 net/octeon_ep: not in enabled drivers build config 00:02:05.555 net/pcap: not in enabled drivers build config 00:02:05.555 net/pfe: not in 
enabled drivers build config 00:02:05.555 net/qede: not in enabled drivers build config 00:02:05.555 net/ring: not in enabled drivers build config 00:02:05.555 net/sfc: not in enabled drivers build config 00:02:05.555 net/softnic: not in enabled drivers build config 00:02:05.555 net/tap: not in enabled drivers build config 00:02:05.555 net/thunderx: not in enabled drivers build config 00:02:05.555 net/txgbe: not in enabled drivers build config 00:02:05.555 net/vdev_netvsc: not in enabled drivers build config 00:02:05.555 net/vhost: not in enabled drivers build config 00:02:05.555 net/virtio: not in enabled drivers build config 00:02:05.555 net/vmxnet3: not in enabled drivers build config 00:02:05.555 raw/*: missing internal dependency, "rawdev" 00:02:05.555 crypto/armv8: not in enabled drivers build config 00:02:05.555 crypto/bcmfs: not in enabled drivers build config 00:02:05.555 crypto/caam_jr: not in enabled drivers build config 00:02:05.555 crypto/ccp: not in enabled drivers build config 00:02:05.555 crypto/cnxk: not in enabled drivers build config 00:02:05.555 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.555 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.555 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.555 crypto/mlx5: not in enabled drivers build config 00:02:05.555 crypto/mvsam: not in enabled drivers build config 00:02:05.555 crypto/nitrox: not in enabled drivers build config 00:02:05.555 crypto/null: not in enabled drivers build config 00:02:05.555 crypto/octeontx: not in enabled drivers build config 00:02:05.555 crypto/openssl: not in enabled drivers build config 00:02:05.555 crypto/scheduler: not in enabled drivers build config 00:02:05.555 crypto/uadk: not in enabled drivers build config 00:02:05.555 crypto/virtio: not in enabled drivers build config 00:02:05.555 compress/isal: not in enabled drivers build config 00:02:05.555 compress/mlx5: not in enabled drivers build config 00:02:05.555 compress/nitrox: not in enabled drivers build config 00:02:05.555 compress/octeontx: not in enabled drivers build config 00:02:05.555 compress/zlib: not in enabled drivers build config 00:02:05.555 regex/*: missing internal dependency, "regexdev" 00:02:05.555 ml/*: missing internal dependency, "mldev" 00:02:05.555 vdpa/ifc: not in enabled drivers build config 00:02:05.555 vdpa/mlx5: not in enabled drivers build config 00:02:05.555 vdpa/nfp: not in enabled drivers build config 00:02:05.555 vdpa/sfc: not in enabled drivers build config 00:02:05.555 event/*: missing internal dependency, "eventdev" 00:02:05.555 baseband/*: missing internal dependency, "bbdev" 00:02:05.555 gpu/*: missing internal dependency, "gpudev" 00:02:05.555 00:02:05.555 00:02:05.815 Build targets in project: 85 00:02:05.815 00:02:05.815 DPDK 24.03.0 00:02:05.815 00:02:05.815 User defined options 00:02:05.815 buildtype : debug 00:02:05.815 default_library : shared 00:02:05.815 libdir : lib 00:02:05.815 prefix : /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:02:05.815 b_sanitize : address 00:02:05.815 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:05.815 c_link_args : 00:02:05.815 cpu_instruction_set: native 00:02:05.815 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:05.815 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:05.815 enable_docs : false 00:02:05.815 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:05.815 enable_kmods : false 00:02:05.815 max_lcores : 128 00:02:05.815 tests : false 00:02:05.815 00:02:05.815 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:06.393 ninja: Entering directory `/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp' 00:02:06.393 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:06.393 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:06.393 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.393 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:06.393 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:06.393 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:06.393 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:06.393 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.393 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:06.393 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:06.393 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.393 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:06.393 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:06.393 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:06.393 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:06.393 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:06.393 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:06.393 [18/268] Linking static target lib/librte_kvargs.a 00:02:06.393 [19/268] Linking static target lib/librte_log.a 00:02:06.654 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.654 [21/268] Linking static target lib/librte_pci.a 00:02:06.654 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.654 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.654 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.654 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.926 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.926 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.926 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:06.926 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.926 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.926 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:06.926 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:06.926 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:06.926 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:06.926 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:06.926 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:06.926 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.926 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:06.926 [39/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:06.926 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.926 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:06.926 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.926 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:06.926 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:06.926 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:06.926 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:06.926 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:06.926 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.926 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.926 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.926 [51/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:06.926 [52/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:06.926 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.926 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:06.926 [55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.926 [56/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:06.926 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:06.926 [58/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.926 [59/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.926 [60/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.926 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.926 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:06.926 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:06.926 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:06.926 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.927 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:06.927 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:06.927 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:06.927 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:06.927 [70/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.927 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:06.927 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:06.927 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:06.927 [74/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:06.927 [75/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:07.189 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:07.189 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:07.189 [78/268] Linking static target lib/librte_ring.a 00:02:07.189 [79/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.189 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.189 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:07.189 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.189 [83/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.189 [84/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:07.189 [85/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:07.189 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:07.189 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:07.189 [88/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.189 [89/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.189 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:07.189 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.189 [92/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.189 [93/268] Linking static target lib/librte_telemetry.a 00:02:07.189 [94/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.189 [95/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:07.189 [96/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.189 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.189 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.189 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.189 [100/268] Linking static target lib/librte_meter.a 00:02:07.189 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:07.189 [102/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:07.189 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.189 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:07.189 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:07.189 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.189 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:07.189 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:07.189 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:07.189 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.189 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:07.189 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:07.189 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:07.189 [114/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:07.189 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:07.189 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.189 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.189 [118/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.189 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:07.190 [120/268] Linking static target lib/librte_cmdline.a 00:02:07.190 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:07.190 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:07.448 [123/268] Linking static target lib/librte_mempool.a 00:02:07.448 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:07.448 [125/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.448 [126/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.448 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:07.448 [128/268] Linking static target lib/librte_net.a 00:02:07.448 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:07.448 [130/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:07.448 [131/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.448 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.448 [133/268] Linking target lib/librte_log.so.24.1 00:02:07.448 [134/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.448 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.448 [136/268] Linking static target lib/librte_rcu.a 00:02:07.448 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:07.448 [138/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.448 [139/268] Linking static target lib/librte_eal.a 00:02:07.448 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:07.448 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.448 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:07.448 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.448 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.448 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:07.448 [146/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:07.448 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.448 [148/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:07.448 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:07.448 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:07.449 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:07.449 [152/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:07.449 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.449 [154/268] Compiling C object 
lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:07.449 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:07.449 [156/268] Linking static target lib/librte_dmadev.a 00:02:07.449 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.449 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.449 [159/268] Linking static target lib/librte_timer.a 00:02:07.708 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:07.708 [161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:07.708 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:07.708 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:07.708 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:07.708 [165/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.708 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:07.708 [167/268] Linking target lib/librte_kvargs.so.24.1 00:02:07.708 [168/268] Linking target lib/librte_telemetry.so.24.1 00:02:07.708 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:07.708 [170/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.708 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.708 [172/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:07.708 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:07.708 [174/268] Linking static target lib/librte_reorder.a 00:02:07.708 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:07.708 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:07.708 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:07.708 [178/268] Linking static target lib/librte_compressdev.a 00:02:07.708 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:07.709 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:07.709 [181/268] Linking static target lib/librte_power.a 00:02:07.709 [182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:07.709 [183/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:07.709 [184/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.709 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:07.709 [186/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:07.709 [187/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:07.709 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:07.968 [189/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.968 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:07.968 [191/268] Linking static target drivers/librte_bus_vdev.a 00:02:07.968 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:07.968 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:07.968 [194/268] Linking static target 
drivers/libtmp_rte_mempool_ring.a 00:02:07.968 [195/268] Linking static target lib/librte_security.a 00:02:07.968 [196/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.968 [197/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.968 [198/268] Linking static target lib/librte_mbuf.a 00:02:07.968 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:07.968 [200/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.968 [201/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.968 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:07.968 [203/268] Linking static target drivers/librte_bus_pci.a 00:02:08.228 [204/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.228 [205/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.228 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:08.228 [207/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.228 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:08.228 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.228 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.228 [211/268] Linking static target lib/librte_hash.a 00:02:08.228 [212/268] Linking static target drivers/librte_mempool_ring.a 00:02:08.228 [213/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.228 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.228 [215/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.488 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:08.488 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.488 [218/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:08.488 [219/268] Linking static target lib/librte_cryptodev.a 00:02:08.488 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.488 [221/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.747 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.747 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.006 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.006 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:09.264 [226/268] Linking static target lib/librte_ethdev.a 00:02:10.199 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.199 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.493 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:13.493 [230/268] Linking static target lib/librte_vhost.a 00:02:15.401 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:17.308 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.308 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.567 [234/268] Linking target lib/librte_eal.so.24.1 00:02:17.567 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:17.567 [236/268] Linking target lib/librte_pci.so.24.1 00:02:17.567 [237/268] Linking target lib/librte_ring.so.24.1 00:02:17.567 [238/268] Linking target lib/librte_meter.so.24.1 00:02:17.567 [239/268] Linking target lib/librte_timer.so.24.1 00:02:17.567 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:17.567 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:17.826 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:17.826 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:17.826 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:17.826 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.826 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:17.826 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:17.826 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:17.826 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:18.084 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:18.084 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:18.084 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:18.084 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:18.084 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:18.344 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:18.344 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:18.344 [257/268] Linking target lib/librte_net.so.24.1 00:02:18.344 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:18.344 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:18.344 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:18.344 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:18.344 [262/268] Linking target lib/librte_hash.so.24.1 00:02:18.344 [263/268] Linking target lib/librte_security.so.24.1 00:02:18.344 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:18.604 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.604 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:18.604 [267/268] Linking target lib/librte_power.so.24.1 00:02:18.604 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:18.604 INFO: autodetecting backend as ninja 00:02:18.604 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:28.591 CC lib/ut_mock/mock.o 00:02:28.591 CC lib/log/log.o 00:02:28.591 CC lib/log/log_flags.o 00:02:28.591 CC lib/log/log_deprecated.o 00:02:28.591 CC lib/ut/ut.o 00:02:28.591 LIB libspdk_ut_mock.a 00:02:28.591 LIB libspdk_ut.a 00:02:28.591 LIB libspdk_log.a 00:02:28.591 SO libspdk_ut_mock.so.6.0 00:02:28.591 SO libspdk_ut.so.2.0 00:02:28.591 SO libspdk_log.so.7.1 
00:02:28.591 SYMLINK libspdk_ut_mock.so 00:02:28.591 SYMLINK libspdk_ut.so 00:02:28.591 SYMLINK libspdk_log.so 00:02:28.849 CC lib/util/base64.o 00:02:28.849 CC lib/util/bit_array.o 00:02:28.849 CC lib/util/cpuset.o 00:02:28.849 CC lib/ioat/ioat.o 00:02:28.849 CC lib/util/crc16.o 00:02:28.849 CC lib/util/crc32.o 00:02:28.849 CC lib/util/crc32c.o 00:02:28.849 CC lib/dma/dma.o 00:02:28.849 CC lib/util/crc32_ieee.o 00:02:28.849 CC lib/util/crc64.o 00:02:28.849 CC lib/util/dif.o 00:02:28.849 CC lib/util/fd.o 00:02:28.849 CC lib/util/fd_group.o 00:02:28.849 CC lib/util/file.o 00:02:28.849 CC lib/util/hexlify.o 00:02:28.849 CC lib/util/pipe.o 00:02:28.849 CC lib/util/iov.o 00:02:28.849 CXX lib/trace_parser/trace.o 00:02:28.849 CC lib/util/net.o 00:02:28.849 CC lib/util/math.o 00:02:28.849 CC lib/util/strerror_tls.o 00:02:28.850 CC lib/util/string.o 00:02:28.850 CC lib/util/uuid.o 00:02:28.850 CC lib/util/xor.o 00:02:28.850 CC lib/util/zipf.o 00:02:28.850 CC lib/util/md5.o 00:02:29.108 CC lib/vfio_user/host/vfio_user_pci.o 00:02:29.108 CC lib/vfio_user/host/vfio_user.o 00:02:29.108 LIB libspdk_dma.a 00:02:29.108 SO libspdk_dma.so.5.0 00:02:29.108 SYMLINK libspdk_dma.so 00:02:29.108 LIB libspdk_ioat.a 00:02:29.108 SO libspdk_ioat.so.7.0 00:02:29.108 SYMLINK libspdk_ioat.so 00:02:29.367 LIB libspdk_vfio_user.a 00:02:29.367 SO libspdk_vfio_user.so.5.0 00:02:29.367 SYMLINK libspdk_vfio_user.so 00:02:29.367 LIB libspdk_util.a 00:02:29.367 SO libspdk_util.so.10.1 00:02:29.626 SYMLINK libspdk_util.so 00:02:29.886 CC lib/env_dpdk/memory.o 00:02:29.886 CC lib/env_dpdk/env.o 00:02:29.886 CC lib/env_dpdk/threads.o 00:02:29.886 CC lib/env_dpdk/pci.o 00:02:29.886 CC lib/env_dpdk/init.o 00:02:29.886 CC lib/env_dpdk/pci_ioat.o 00:02:29.886 CC lib/env_dpdk/pci_virtio.o 00:02:29.886 CC lib/env_dpdk/pci_vmd.o 00:02:29.886 CC lib/env_dpdk/pci_dpdk.o 00:02:29.886 CC lib/env_dpdk/sigbus_handler.o 00:02:29.886 CC lib/env_dpdk/pci_idxd.o 00:02:29.886 CC lib/env_dpdk/pci_event.o 00:02:29.886 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:29.886 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:29.886 CC lib/conf/conf.o 00:02:29.886 CC lib/rdma_utils/rdma_utils.o 00:02:29.886 CC lib/vmd/vmd.o 00:02:29.886 CC lib/vmd/led.o 00:02:29.886 CC lib/json/json_parse.o 00:02:29.886 CC lib/idxd/idxd.o 00:02:29.886 CC lib/idxd/idxd_user.o 00:02:29.886 CC lib/json/json_util.o 00:02:29.886 CC lib/json/json_write.o 00:02:29.886 CC lib/idxd/idxd_kernel.o 00:02:30.145 LIB libspdk_conf.a 00:02:30.145 LIB libspdk_rdma_utils.a 00:02:30.145 SO libspdk_conf.so.6.0 00:02:30.145 LIB libspdk_json.a 00:02:30.145 SO libspdk_rdma_utils.so.1.0 00:02:30.145 SO libspdk_json.so.6.0 00:02:30.145 SYMLINK libspdk_conf.so 00:02:30.145 SYMLINK libspdk_rdma_utils.so 00:02:30.404 SYMLINK libspdk_json.so 00:02:30.664 LIB libspdk_idxd.a 00:02:30.664 CC lib/rdma_provider/common.o 00:02:30.664 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:30.664 LIB libspdk_vmd.a 00:02:30.664 LIB libspdk_trace_parser.a 00:02:30.664 SO libspdk_idxd.so.12.1 00:02:30.664 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:30.664 CC lib/jsonrpc/jsonrpc_server.o 00:02:30.664 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:30.664 CC lib/jsonrpc/jsonrpc_client.o 00:02:30.664 SO libspdk_vmd.so.6.0 00:02:30.664 SO libspdk_trace_parser.so.6.0 00:02:30.664 SYMLINK libspdk_idxd.so 00:02:30.664 SYMLINK libspdk_vmd.so 00:02:30.664 SYMLINK libspdk_trace_parser.so 00:02:30.664 LIB libspdk_rdma_provider.a 00:02:30.923 SO libspdk_rdma_provider.so.7.0 00:02:30.923 LIB libspdk_jsonrpc.a 00:02:30.923 SYMLINK libspdk_rdma_provider.so 
00:02:30.923 SO libspdk_jsonrpc.so.6.0 00:02:30.923 SYMLINK libspdk_jsonrpc.so 00:02:31.183 LIB libspdk_env_dpdk.a 00:02:31.183 CC lib/rpc/rpc.o 00:02:31.183 SO libspdk_env_dpdk.so.15.1 00:02:31.442 SYMLINK libspdk_env_dpdk.so 00:02:31.442 LIB libspdk_rpc.a 00:02:31.442 SO libspdk_rpc.so.6.0 00:02:31.442 SYMLINK libspdk_rpc.so 00:02:32.012 CC lib/trace/trace.o 00:02:32.012 CC lib/notify/notify.o 00:02:32.012 CC lib/trace/trace_flags.o 00:02:32.012 CC lib/keyring/keyring.o 00:02:32.012 CC lib/notify/notify_rpc.o 00:02:32.012 CC lib/trace/trace_rpc.o 00:02:32.012 CC lib/keyring/keyring_rpc.o 00:02:32.012 LIB libspdk_notify.a 00:02:32.012 SO libspdk_notify.so.6.0 00:02:32.012 LIB libspdk_keyring.a 00:02:32.012 LIB libspdk_trace.a 00:02:32.012 SO libspdk_keyring.so.2.0 00:02:32.012 SYMLINK libspdk_notify.so 00:02:32.272 SO libspdk_trace.so.11.0 00:02:32.272 SYMLINK libspdk_keyring.so 00:02:32.272 SYMLINK libspdk_trace.so 00:02:32.531 CC lib/thread/thread.o 00:02:32.531 CC lib/thread/iobuf.o 00:02:32.531 CC lib/sock/sock.o 00:02:32.531 CC lib/sock/sock_rpc.o 00:02:32.790 LIB libspdk_sock.a 00:02:33.048 SO libspdk_sock.so.10.0 00:02:33.048 SYMLINK libspdk_sock.so 00:02:33.307 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:33.307 CC lib/nvme/nvme_ctrlr.o 00:02:33.307 CC lib/nvme/nvme_fabric.o 00:02:33.307 CC lib/nvme/nvme_ns.o 00:02:33.307 CC lib/nvme/nvme_ns_cmd.o 00:02:33.307 CC lib/nvme/nvme_pcie_common.o 00:02:33.307 CC lib/nvme/nvme_pcie.o 00:02:33.307 CC lib/nvme/nvme_qpair.o 00:02:33.307 CC lib/nvme/nvme.o 00:02:33.307 CC lib/nvme/nvme_quirks.o 00:02:33.307 CC lib/nvme/nvme_transport.o 00:02:33.307 CC lib/nvme/nvme_discovery.o 00:02:33.307 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:33.307 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:33.307 CC lib/nvme/nvme_tcp.o 00:02:33.307 CC lib/nvme/nvme_opal.o 00:02:33.307 CC lib/nvme/nvme_io_msg.o 00:02:33.307 CC lib/nvme/nvme_poll_group.o 00:02:33.307 CC lib/nvme/nvme_zns.o 00:02:33.307 CC lib/nvme/nvme_stubs.o 00:02:33.307 CC lib/nvme/nvme_auth.o 00:02:33.307 CC lib/nvme/nvme_cuse.o 00:02:33.307 CC lib/nvme/nvme_rdma.o 00:02:33.874 LIB libspdk_thread.a 00:02:33.874 SO libspdk_thread.so.11.0 00:02:34.132 SYMLINK libspdk_thread.so 00:02:34.391 CC lib/fsdev/fsdev.o 00:02:34.391 CC lib/fsdev/fsdev_io.o 00:02:34.391 CC lib/fsdev/fsdev_rpc.o 00:02:34.391 CC lib/accel/accel.o 00:02:34.391 CC lib/accel/accel_rpc.o 00:02:34.391 CC lib/accel/accel_sw.o 00:02:34.391 CC lib/blob/blobstore.o 00:02:34.391 CC lib/blob/request.o 00:02:34.391 CC lib/blob/zeroes.o 00:02:34.391 CC lib/blob/blob_bs_dev.o 00:02:34.391 CC lib/virtio/virtio.o 00:02:34.391 CC lib/virtio/virtio_vhost_user.o 00:02:34.391 CC lib/virtio/virtio_vfio_user.o 00:02:34.391 CC lib/init/json_config.o 00:02:34.391 CC lib/virtio/virtio_pci.o 00:02:34.391 CC lib/init/subsystem.o 00:02:34.391 CC lib/init/subsystem_rpc.o 00:02:34.391 CC lib/init/rpc.o 00:02:34.651 LIB libspdk_init.a 00:02:34.651 SO libspdk_init.so.6.0 00:02:34.651 LIB libspdk_virtio.a 00:02:34.651 SYMLINK libspdk_init.so 00:02:34.651 SO libspdk_virtio.so.7.0 00:02:34.910 SYMLINK libspdk_virtio.so 00:02:34.910 LIB libspdk_fsdev.a 00:02:34.910 SO libspdk_fsdev.so.2.0 00:02:34.910 CC lib/event/app.o 00:02:34.910 CC lib/event/reactor.o 00:02:34.910 CC lib/event/log_rpc.o 00:02:34.910 CC lib/event/app_rpc.o 00:02:34.910 CC lib/event/scheduler_static.o 00:02:35.169 SYMLINK libspdk_fsdev.so 00:02:35.428 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:35.428 LIB libspdk_accel.a 00:02:35.428 LIB libspdk_nvme.a 00:02:35.428 SO libspdk_accel.so.16.0 00:02:35.428 LIB 
libspdk_event.a 00:02:35.428 SO libspdk_event.so.14.0 00:02:35.428 SYMLINK libspdk_accel.so 00:02:35.428 SO libspdk_nvme.so.15.0 00:02:35.687 SYMLINK libspdk_event.so 00:02:35.687 SYMLINK libspdk_nvme.so 00:02:35.687 CC lib/bdev/bdev.o 00:02:35.687 CC lib/bdev/bdev_rpc.o 00:02:35.687 CC lib/bdev/bdev_zone.o 00:02:35.687 CC lib/bdev/part.o 00:02:35.687 CC lib/bdev/scsi_nvme.o 00:02:35.955 LIB libspdk_fuse_dispatcher.a 00:02:35.956 SO libspdk_fuse_dispatcher.so.1.0 00:02:35.956 SYMLINK libspdk_fuse_dispatcher.so 00:02:37.340 LIB libspdk_blob.a 00:02:37.340 SO libspdk_blob.so.11.0 00:02:37.340 SYMLINK libspdk_blob.so 00:02:37.909 CC lib/blobfs/blobfs.o 00:02:37.909 CC lib/blobfs/tree.o 00:02:37.909 CC lib/lvol/lvol.o 00:02:38.169 LIB libspdk_bdev.a 00:02:38.169 SO libspdk_bdev.so.17.0 00:02:38.169 SYMLINK libspdk_bdev.so 00:02:38.428 LIB libspdk_blobfs.a 00:02:38.687 SO libspdk_blobfs.so.10.0 00:02:38.687 CC lib/scsi/dev.o 00:02:38.687 CC lib/scsi/lun.o 00:02:38.687 CC lib/scsi/port.o 00:02:38.687 CC lib/scsi/scsi.o 00:02:38.687 CC lib/nvmf/ctrlr.o 00:02:38.687 CC lib/ublk/ublk.o 00:02:38.687 CC lib/nvmf/ctrlr_discovery.o 00:02:38.687 CC lib/scsi/scsi_bdev.o 00:02:38.687 CC lib/scsi/scsi_pr.o 00:02:38.687 CC lib/nvmf/ctrlr_bdev.o 00:02:38.687 CC lib/ublk/ublk_rpc.o 00:02:38.687 CC lib/scsi/scsi_rpc.o 00:02:38.687 CC lib/ftl/ftl_core.o 00:02:38.687 CC lib/nvmf/subsystem.o 00:02:38.687 CC lib/scsi/task.o 00:02:38.687 CC lib/ftl/ftl_init.o 00:02:38.687 CC lib/nvmf/nvmf.o 00:02:38.687 CC lib/nvmf/nvmf_rpc.o 00:02:38.687 CC lib/ftl/ftl_layout.o 00:02:38.687 CC lib/nvmf/transport.o 00:02:38.687 CC lib/ftl/ftl_debug.o 00:02:38.687 CC lib/nvmf/tcp.o 00:02:38.687 CC lib/ftl/ftl_io.o 00:02:38.687 CC lib/nvmf/stubs.o 00:02:38.687 CC lib/ftl/ftl_sb.o 00:02:38.687 CC lib/nbd/nbd.o 00:02:38.687 CC lib/nbd/nbd_rpc.o 00:02:38.687 CC lib/nvmf/mdns_server.o 00:02:38.687 CC lib/nvmf/rdma.o 00:02:38.687 CC lib/ftl/ftl_l2p.o 00:02:38.687 CC lib/nvmf/auth.o 00:02:38.687 CC lib/ftl/ftl_l2p_flat.o 00:02:38.687 CC lib/ftl/ftl_band.o 00:02:38.687 CC lib/ftl/ftl_nv_cache.o 00:02:38.687 CC lib/ftl/ftl_band_ops.o 00:02:38.687 CC lib/ftl/ftl_writer.o 00:02:38.687 CC lib/ftl/ftl_rq.o 00:02:38.687 CC lib/ftl/ftl_reloc.o 00:02:38.687 CC lib/ftl/ftl_l2p_cache.o 00:02:38.687 CC lib/ftl/ftl_p2l.o 00:02:38.687 CC lib/ftl/ftl_p2l_log.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:38.687 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:38.687 CC lib/ftl/utils/ftl_conf.o 00:02:38.687 CC lib/ftl/utils/ftl_md.o 00:02:38.687 CC lib/ftl/utils/ftl_mempool.o 00:02:38.687 LIB libspdk_lvol.a 00:02:38.687 CC lib/ftl/utils/ftl_bitmap.o 00:02:38.687 CC lib/ftl/utils/ftl_property.o 00:02:38.687 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:38.687 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:38.687 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:38.687 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:38.687 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:38.687 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:38.687 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:38.687 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:02:38.687 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:38.688 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:38.688 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:38.688 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:38.688 CC lib/ftl/base/ftl_base_dev.o 00:02:38.688 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:38.688 CC lib/ftl/base/ftl_base_bdev.o 00:02:38.688 CC lib/ftl/ftl_trace.o 00:02:38.688 SYMLINK libspdk_blobfs.so 00:02:38.688 SO libspdk_lvol.so.10.0 00:02:38.947 SYMLINK libspdk_lvol.so 00:02:39.206 LIB libspdk_scsi.a 00:02:39.206 LIB libspdk_nbd.a 00:02:39.465 SO libspdk_scsi.so.9.0 00:02:39.465 SO libspdk_nbd.so.7.0 00:02:39.465 SYMLINK libspdk_nbd.so 00:02:39.465 SYMLINK libspdk_scsi.so 00:02:39.465 LIB libspdk_ublk.a 00:02:39.723 SO libspdk_ublk.so.3.0 00:02:39.723 SYMLINK libspdk_ublk.so 00:02:39.723 CC lib/vhost/vhost_blk.o 00:02:39.723 CC lib/vhost/vhost_scsi.o 00:02:39.723 CC lib/vhost/vhost.o 00:02:39.723 CC lib/vhost/vhost_rpc.o 00:02:39.723 CC lib/vhost/rte_vhost_user.o 00:02:39.723 CC lib/iscsi/conn.o 00:02:39.723 CC lib/iscsi/init_grp.o 00:02:39.723 CC lib/iscsi/iscsi.o 00:02:39.723 CC lib/iscsi/param.o 00:02:39.723 CC lib/iscsi/portal_grp.o 00:02:39.723 CC lib/iscsi/tgt_node.o 00:02:39.723 CC lib/iscsi/iscsi_subsystem.o 00:02:39.723 CC lib/iscsi/iscsi_rpc.o 00:02:39.723 CC lib/iscsi/task.o 00:02:39.982 LIB libspdk_ftl.a 00:02:39.982 SO libspdk_ftl.so.9.0 00:02:40.241 SYMLINK libspdk_ftl.so 00:02:40.810 LIB libspdk_vhost.a 00:02:40.810 SO libspdk_vhost.so.8.0 00:02:40.810 SYMLINK libspdk_vhost.so 00:02:40.810 LIB libspdk_nvmf.a 00:02:41.069 SO libspdk_nvmf.so.20.0 00:02:41.069 LIB libspdk_iscsi.a 00:02:41.069 SO libspdk_iscsi.so.8.0 00:02:41.069 SYMLINK libspdk_nvmf.so 00:02:41.329 SYMLINK libspdk_iscsi.so 00:02:41.898 CC module/env_dpdk/env_dpdk_rpc.o 00:02:41.898 CC module/sock/posix/posix.o 00:02:41.898 CC module/blob/bdev/blob_bdev.o 00:02:41.898 CC module/accel/error/accel_error.o 00:02:41.898 CC module/accel/ioat/accel_ioat.o 00:02:41.898 CC module/accel/ioat/accel_ioat_rpc.o 00:02:41.898 CC module/accel/error/accel_error_rpc.o 00:02:41.898 CC module/accel/iaa/accel_iaa.o 00:02:41.898 CC module/accel/iaa/accel_iaa_rpc.o 00:02:41.898 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:41.898 CC module/keyring/linux/keyring.o 00:02:41.898 CC module/keyring/linux/keyring_rpc.o 00:02:41.898 CC module/scheduler/gscheduler/gscheduler.o 00:02:41.898 CC module/fsdev/aio/fsdev_aio.o 00:02:41.898 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:41.898 LIB libspdk_env_dpdk_rpc.a 00:02:41.898 CC module/fsdev/aio/linux_aio_mgr.o 00:02:41.898 CC module/keyring/file/keyring.o 00:02:41.898 CC module/keyring/file/keyring_rpc.o 00:02:41.898 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:41.898 CC module/accel/dsa/accel_dsa.o 00:02:41.898 CC module/accel/dsa/accel_dsa_rpc.o 00:02:41.898 SO libspdk_env_dpdk_rpc.so.6.0 00:02:41.898 SYMLINK libspdk_env_dpdk_rpc.so 00:02:42.157 LIB libspdk_keyring_file.a 00:02:42.157 LIB libspdk_scheduler_gscheduler.a 00:02:42.157 LIB libspdk_keyring_linux.a 00:02:42.157 SO libspdk_keyring_file.so.2.0 00:02:42.157 LIB libspdk_scheduler_dpdk_governor.a 00:02:42.157 LIB libspdk_accel_ioat.a 00:02:42.157 SO libspdk_scheduler_gscheduler.so.4.0 00:02:42.157 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:42.157 SO libspdk_keyring_linux.so.1.0 00:02:42.157 LIB libspdk_accel_error.a 00:02:42.157 LIB libspdk_scheduler_dynamic.a 00:02:42.157 SO libspdk_accel_ioat.so.6.0 00:02:42.157 LIB libspdk_accel_iaa.a 00:02:42.157 SYMLINK libspdk_keyring_file.so 
00:02:42.157 SO libspdk_accel_error.so.2.0 00:02:42.157 SYMLINK libspdk_scheduler_gscheduler.so 00:02:42.157 SO libspdk_scheduler_dynamic.so.4.0 00:02:42.157 SO libspdk_accel_iaa.so.3.0 00:02:42.157 SYMLINK libspdk_keyring_linux.so 00:02:42.157 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:42.157 SYMLINK libspdk_accel_ioat.so 00:02:42.157 LIB libspdk_blob_bdev.a 00:02:42.157 SYMLINK libspdk_accel_error.so 00:02:42.157 LIB libspdk_accel_dsa.a 00:02:42.157 SYMLINK libspdk_scheduler_dynamic.so 00:02:42.157 SYMLINK libspdk_accel_iaa.so 00:02:42.157 SO libspdk_blob_bdev.so.11.0 00:02:42.157 SO libspdk_accel_dsa.so.5.0 00:02:42.417 SYMLINK libspdk_blob_bdev.so 00:02:42.417 SYMLINK libspdk_accel_dsa.so 00:02:42.692 LIB libspdk_fsdev_aio.a 00:02:42.692 SO libspdk_fsdev_aio.so.1.0 00:02:42.692 LIB libspdk_sock_posix.a 00:02:42.692 SO libspdk_sock_posix.so.6.0 00:02:42.692 SYMLINK libspdk_fsdev_aio.so 00:02:42.692 CC module/bdev/delay/vbdev_delay.o 00:02:42.692 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:42.692 CC module/bdev/gpt/gpt.o 00:02:42.692 CC module/bdev/gpt/vbdev_gpt.o 00:02:42.692 CC module/bdev/split/vbdev_split.o 00:02:42.692 CC module/bdev/split/vbdev_split_rpc.o 00:02:42.692 CC module/bdev/lvol/vbdev_lvol.o 00:02:42.692 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:42.692 CC module/bdev/malloc/bdev_malloc.o 00:02:42.692 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:42.692 CC module/bdev/raid/bdev_raid.o 00:02:42.692 CC module/blobfs/bdev/blobfs_bdev.o 00:02:42.692 CC module/bdev/nvme/bdev_nvme.o 00:02:42.692 CC module/bdev/raid/bdev_raid_sb.o 00:02:42.692 CC module/bdev/raid/bdev_raid_rpc.o 00:02:42.692 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:42.692 CC module/bdev/passthru/vbdev_passthru.o 00:02:42.692 CC module/bdev/iscsi/bdev_iscsi.o 00:02:42.692 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:42.692 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:42.692 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:42.692 CC module/bdev/nvme/nvme_rpc.o 00:02:42.692 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:42.692 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:42.692 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:42.692 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:42.692 CC module/bdev/raid/raid0.o 00:02:42.692 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:42.692 CC module/bdev/nvme/bdev_mdns_client.o 00:02:42.692 CC module/bdev/nvme/vbdev_opal.o 00:02:42.692 CC module/bdev/raid/raid1.o 00:02:42.692 CC module/bdev/raid/concat.o 00:02:42.692 CC module/bdev/null/bdev_null.o 00:02:42.692 CC module/bdev/ftl/bdev_ftl.o 00:02:42.692 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:42.692 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:42.692 CC module/bdev/null/bdev_null_rpc.o 00:02:42.692 CC module/bdev/error/vbdev_error.o 00:02:42.692 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:42.692 CC module/bdev/aio/bdev_aio.o 00:02:42.692 CC module/bdev/error/vbdev_error_rpc.o 00:02:42.692 SYMLINK libspdk_sock_posix.so 00:02:42.692 CC module/bdev/aio/bdev_aio_rpc.o 00:02:42.956 LIB libspdk_blobfs_bdev.a 00:02:42.956 SO libspdk_blobfs_bdev.so.6.0 00:02:42.956 LIB libspdk_bdev_split.a 00:02:43.215 LIB libspdk_bdev_gpt.a 00:02:43.215 SO libspdk_bdev_split.so.6.0 00:02:43.215 SYMLINK libspdk_blobfs_bdev.so 00:02:43.215 LIB libspdk_bdev_null.a 00:02:43.215 SO libspdk_bdev_gpt.so.6.0 00:02:43.215 LIB libspdk_bdev_error.a 00:02:43.215 LIB libspdk_bdev_ftl.a 00:02:43.215 SYMLINK libspdk_bdev_split.so 00:02:43.215 SO libspdk_bdev_null.so.6.0 00:02:43.215 LIB libspdk_bdev_zone_block.a 00:02:43.215 LIB 
libspdk_bdev_passthru.a 00:02:43.215 SO libspdk_bdev_error.so.6.0 00:02:43.215 SO libspdk_bdev_ftl.so.6.0 00:02:43.215 SYMLINK libspdk_bdev_gpt.so 00:02:43.215 LIB libspdk_bdev_delay.a 00:02:43.215 SO libspdk_bdev_zone_block.so.6.0 00:02:43.215 SO libspdk_bdev_passthru.so.6.0 00:02:43.215 LIB libspdk_bdev_aio.a 00:02:43.215 SYMLINK libspdk_bdev_null.so 00:02:43.215 LIB libspdk_bdev_iscsi.a 00:02:43.215 SO libspdk_bdev_delay.so.6.0 00:02:43.215 SO libspdk_bdev_aio.so.6.0 00:02:43.215 SYMLINK libspdk_bdev_ftl.so 00:02:43.215 SYMLINK libspdk_bdev_error.so 00:02:43.215 LIB libspdk_bdev_malloc.a 00:02:43.215 SO libspdk_bdev_iscsi.so.6.0 00:02:43.215 SYMLINK libspdk_bdev_passthru.so 00:02:43.215 SYMLINK libspdk_bdev_zone_block.so 00:02:43.215 SO libspdk_bdev_malloc.so.6.0 00:02:43.215 SYMLINK libspdk_bdev_delay.so 00:02:43.215 SYMLINK libspdk_bdev_aio.so 00:02:43.215 SYMLINK libspdk_bdev_iscsi.so 00:02:43.215 SYMLINK libspdk_bdev_malloc.so 00:02:43.475 LIB libspdk_bdev_lvol.a 00:02:43.475 SO libspdk_bdev_lvol.so.6.0 00:02:43.475 LIB libspdk_bdev_virtio.a 00:02:43.475 SO libspdk_bdev_virtio.so.6.0 00:02:43.475 SYMLINK libspdk_bdev_lvol.so 00:02:43.475 SYMLINK libspdk_bdev_virtio.so 00:02:43.734 LIB libspdk_bdev_raid.a 00:02:43.995 SO libspdk_bdev_raid.so.6.0 00:02:43.995 SYMLINK libspdk_bdev_raid.so 00:02:45.377 LIB libspdk_bdev_nvme.a 00:02:45.377 SO libspdk_bdev_nvme.so.7.1 00:02:45.377 SYMLINK libspdk_bdev_nvme.so 00:02:45.947 CC module/event/subsystems/vmd/vmd.o 00:02:45.947 CC module/event/subsystems/iobuf/iobuf.o 00:02:45.947 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:45.947 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:45.947 CC module/event/subsystems/sock/sock.o 00:02:45.947 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:45.947 CC module/event/subsystems/fsdev/fsdev.o 00:02:45.947 CC module/event/subsystems/keyring/keyring.o 00:02:45.947 CC module/event/subsystems/scheduler/scheduler.o 00:02:46.206 LIB libspdk_event_fsdev.a 00:02:46.206 LIB libspdk_event_sock.a 00:02:46.206 LIB libspdk_event_scheduler.a 00:02:46.206 LIB libspdk_event_vmd.a 00:02:46.206 LIB libspdk_event_keyring.a 00:02:46.206 LIB libspdk_event_vhost_blk.a 00:02:46.206 LIB libspdk_event_iobuf.a 00:02:46.206 SO libspdk_event_fsdev.so.1.0 00:02:46.206 SO libspdk_event_sock.so.5.0 00:02:46.206 SO libspdk_event_scheduler.so.4.0 00:02:46.206 SO libspdk_event_vmd.so.6.0 00:02:46.207 SO libspdk_event_keyring.so.1.0 00:02:46.207 SO libspdk_event_iobuf.so.3.0 00:02:46.207 SO libspdk_event_vhost_blk.so.3.0 00:02:46.207 SYMLINK libspdk_event_fsdev.so 00:02:46.207 SYMLINK libspdk_event_sock.so 00:02:46.207 SYMLINK libspdk_event_keyring.so 00:02:46.207 SYMLINK libspdk_event_vmd.so 00:02:46.207 SYMLINK libspdk_event_scheduler.so 00:02:46.207 SYMLINK libspdk_event_vhost_blk.so 00:02:46.207 SYMLINK libspdk_event_iobuf.so 00:02:46.466 CC module/event/subsystems/accel/accel.o 00:02:46.725 LIB libspdk_event_accel.a 00:02:46.725 SO libspdk_event_accel.so.6.0 00:02:46.725 SYMLINK libspdk_event_accel.so 00:02:46.985 CC module/event/subsystems/bdev/bdev.o 00:02:47.245 LIB libspdk_event_bdev.a 00:02:47.245 SO libspdk_event_bdev.so.6.0 00:02:47.245 SYMLINK libspdk_event_bdev.so 00:02:47.505 CC module/event/subsystems/ublk/ublk.o 00:02:47.765 CC module/event/subsystems/nbd/nbd.o 00:02:47.765 CC module/event/subsystems/scsi/scsi.o 00:02:47.765 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:47.765 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:47.765 LIB libspdk_event_nbd.a 00:02:47.765 LIB libspdk_event_ublk.a 00:02:47.765 LIB 
libspdk_event_scsi.a 00:02:47.765 SO libspdk_event_nbd.so.6.0 00:02:47.765 SO libspdk_event_ublk.so.3.0 00:02:47.765 SO libspdk_event_scsi.so.6.0 00:02:47.765 LIB libspdk_event_nvmf.a 00:02:47.765 SYMLINK libspdk_event_nbd.so 00:02:47.765 SYMLINK libspdk_event_ublk.so 00:02:47.765 SYMLINK libspdk_event_scsi.so 00:02:47.765 SO libspdk_event_nvmf.so.6.0 00:02:48.024 SYMLINK libspdk_event_nvmf.so 00:02:48.284 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:48.284 CC module/event/subsystems/iscsi/iscsi.o 00:02:48.284 LIB libspdk_event_vhost_scsi.a 00:02:48.284 LIB libspdk_event_iscsi.a 00:02:48.284 SO libspdk_event_vhost_scsi.so.3.0 00:02:48.284 SO libspdk_event_iscsi.so.6.0 00:02:48.544 SYMLINK libspdk_event_vhost_scsi.so 00:02:48.544 SYMLINK libspdk_event_iscsi.so 00:02:48.544 SO libspdk.so.6.0 00:02:48.544 SYMLINK libspdk.so 00:02:49.121 CC app/spdk_nvme_discover/discovery_aer.o 00:02:49.121 CC app/spdk_nvme_perf/perf.o 00:02:49.121 CC app/trace_record/trace_record.o 00:02:49.121 CC app/spdk_lspci/spdk_lspci.o 00:02:49.121 CC test/rpc_client/rpc_client_test.o 00:02:49.121 TEST_HEADER include/spdk/accel.h 00:02:49.121 TEST_HEADER include/spdk/accel_module.h 00:02:49.121 TEST_HEADER include/spdk/assert.h 00:02:49.121 TEST_HEADER include/spdk/barrier.h 00:02:49.121 TEST_HEADER include/spdk/bdev.h 00:02:49.121 TEST_HEADER include/spdk/base64.h 00:02:49.121 TEST_HEADER include/spdk/bdev_zone.h 00:02:49.121 TEST_HEADER include/spdk/bdev_module.h 00:02:49.121 CXX app/trace/trace.o 00:02:49.121 CC app/spdk_nvme_identify/identify.o 00:02:49.121 TEST_HEADER include/spdk/bit_array.h 00:02:49.121 CC app/spdk_top/spdk_top.o 00:02:49.121 TEST_HEADER include/spdk/bit_pool.h 00:02:49.121 TEST_HEADER include/spdk/blob_bdev.h 00:02:49.121 TEST_HEADER include/spdk/blobfs.h 00:02:49.121 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:49.121 TEST_HEADER include/spdk/conf.h 00:02:49.121 TEST_HEADER include/spdk/blob.h 00:02:49.121 TEST_HEADER include/spdk/config.h 00:02:49.121 TEST_HEADER include/spdk/crc32.h 00:02:49.121 TEST_HEADER include/spdk/cpuset.h 00:02:49.121 TEST_HEADER include/spdk/crc64.h 00:02:49.121 TEST_HEADER include/spdk/crc16.h 00:02:49.121 TEST_HEADER include/spdk/dif.h 00:02:49.121 TEST_HEADER include/spdk/dma.h 00:02:49.121 TEST_HEADER include/spdk/endian.h 00:02:49.121 TEST_HEADER include/spdk/event.h 00:02:49.121 TEST_HEADER include/spdk/env.h 00:02:49.121 TEST_HEADER include/spdk/env_dpdk.h 00:02:49.121 TEST_HEADER include/spdk/fd_group.h 00:02:49.121 TEST_HEADER include/spdk/fsdev.h 00:02:49.121 TEST_HEADER include/spdk/file.h 00:02:49.121 TEST_HEADER include/spdk/fd.h 00:02:49.121 TEST_HEADER include/spdk/fsdev_module.h 00:02:49.121 TEST_HEADER include/spdk/ftl.h 00:02:49.121 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:49.121 TEST_HEADER include/spdk/hexlify.h 00:02:49.121 TEST_HEADER include/spdk/gpt_spec.h 00:02:49.121 TEST_HEADER include/spdk/histogram_data.h 00:02:49.121 TEST_HEADER include/spdk/idxd.h 00:02:49.121 TEST_HEADER include/spdk/idxd_spec.h 00:02:49.121 TEST_HEADER include/spdk/ioat.h 00:02:49.121 TEST_HEADER include/spdk/ioat_spec.h 00:02:49.121 TEST_HEADER include/spdk/init.h 00:02:49.121 TEST_HEADER include/spdk/iscsi_spec.h 00:02:49.121 TEST_HEADER include/spdk/json.h 00:02:49.121 TEST_HEADER include/spdk/jsonrpc.h 00:02:49.121 TEST_HEADER include/spdk/keyring.h 00:02:49.121 TEST_HEADER include/spdk/likely.h 00:02:49.121 TEST_HEADER include/spdk/keyring_module.h 00:02:49.121 TEST_HEADER include/spdk/lvol.h 00:02:49.121 CC app/nvmf_tgt/nvmf_main.o 
00:02:49.121 TEST_HEADER include/spdk/log.h 00:02:49.121 TEST_HEADER include/spdk/md5.h 00:02:49.121 TEST_HEADER include/spdk/nbd.h 00:02:49.121 TEST_HEADER include/spdk/memory.h 00:02:49.121 TEST_HEADER include/spdk/mmio.h 00:02:49.121 TEST_HEADER include/spdk/net.h 00:02:49.121 TEST_HEADER include/spdk/nvme.h 00:02:49.122 TEST_HEADER include/spdk/notify.h 00:02:49.122 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:49.122 TEST_HEADER include/spdk/nvme_intel.h 00:02:49.122 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:49.122 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:49.122 TEST_HEADER include/spdk/nvme_spec.h 00:02:49.122 TEST_HEADER include/spdk/nvme_zns.h 00:02:49.122 CC app/iscsi_tgt/iscsi_tgt.o 00:02:49.122 TEST_HEADER include/spdk/nvmf.h 00:02:49.122 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:49.122 CC app/spdk_tgt/spdk_tgt.o 00:02:49.122 TEST_HEADER include/spdk/nvmf_spec.h 00:02:49.122 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:49.122 TEST_HEADER include/spdk/nvmf_transport.h 00:02:49.122 CC app/spdk_dd/spdk_dd.o 00:02:49.122 TEST_HEADER include/spdk/pci_ids.h 00:02:49.122 TEST_HEADER include/spdk/opal.h 00:02:49.122 TEST_HEADER include/spdk/queue.h 00:02:49.122 TEST_HEADER include/spdk/pipe.h 00:02:49.122 TEST_HEADER include/spdk/reduce.h 00:02:49.122 TEST_HEADER include/spdk/opal_spec.h 00:02:49.122 TEST_HEADER include/spdk/rpc.h 00:02:49.122 TEST_HEADER include/spdk/scheduler.h 00:02:49.122 TEST_HEADER include/spdk/scsi_spec.h 00:02:49.122 TEST_HEADER include/spdk/scsi.h 00:02:49.122 TEST_HEADER include/spdk/sock.h 00:02:49.122 TEST_HEADER include/spdk/stdinc.h 00:02:49.122 TEST_HEADER include/spdk/string.h 00:02:49.122 TEST_HEADER include/spdk/thread.h 00:02:49.122 TEST_HEADER include/spdk/trace_parser.h 00:02:49.122 TEST_HEADER include/spdk/tree.h 00:02:49.122 TEST_HEADER include/spdk/ublk.h 00:02:49.122 TEST_HEADER include/spdk/trace.h 00:02:49.122 TEST_HEADER include/spdk/util.h 00:02:49.122 TEST_HEADER include/spdk/uuid.h 00:02:49.122 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:49.122 TEST_HEADER include/spdk/version.h 00:02:49.122 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:49.122 TEST_HEADER include/spdk/vmd.h 00:02:49.122 TEST_HEADER include/spdk/xor.h 00:02:49.122 TEST_HEADER include/spdk/vhost.h 00:02:49.122 TEST_HEADER include/spdk/zipf.h 00:02:49.122 CXX test/cpp_headers/accel.o 00:02:49.122 CXX test/cpp_headers/assert.o 00:02:49.122 CXX test/cpp_headers/barrier.o 00:02:49.122 CXX test/cpp_headers/accel_module.o 00:02:49.122 CXX test/cpp_headers/base64.o 00:02:49.122 CXX test/cpp_headers/bdev.o 00:02:49.122 CXX test/cpp_headers/bdev_module.o 00:02:49.122 CXX test/cpp_headers/bit_pool.o 00:02:49.122 CXX test/cpp_headers/bdev_zone.o 00:02:49.122 CXX test/cpp_headers/bit_array.o 00:02:49.122 CXX test/cpp_headers/blob_bdev.o 00:02:49.122 CXX test/cpp_headers/blobfs_bdev.o 00:02:49.122 CXX test/cpp_headers/blobfs.o 00:02:49.122 CXX test/cpp_headers/conf.o 00:02:49.122 CXX test/cpp_headers/blob.o 00:02:49.122 CXX test/cpp_headers/config.o 00:02:49.122 CXX test/cpp_headers/crc16.o 00:02:49.122 CXX test/cpp_headers/cpuset.o 00:02:49.122 CXX test/cpp_headers/crc64.o 00:02:49.122 CXX test/cpp_headers/dma.o 00:02:49.122 CXX test/cpp_headers/crc32.o 00:02:49.122 CXX test/cpp_headers/endian.o 00:02:49.122 CXX test/cpp_headers/dif.o 00:02:49.122 CXX test/cpp_headers/env_dpdk.o 00:02:49.122 CXX test/cpp_headers/env.o 00:02:49.122 CXX test/cpp_headers/fd.o 00:02:49.122 CXX test/cpp_headers/fd_group.o 00:02:49.122 CXX test/cpp_headers/event.o 00:02:49.122 CXX 
test/cpp_headers/fsdev.o 00:02:49.122 CXX test/cpp_headers/file.o 00:02:49.122 CXX test/cpp_headers/fsdev_module.o 00:02:49.122 CXX test/cpp_headers/ftl.o 00:02:49.122 CXX test/cpp_headers/fuse_dispatcher.o 00:02:49.122 CXX test/cpp_headers/gpt_spec.o 00:02:49.122 CXX test/cpp_headers/hexlify.o 00:02:49.122 CXX test/cpp_headers/histogram_data.o 00:02:49.122 CXX test/cpp_headers/idxd.o 00:02:49.122 CXX test/cpp_headers/idxd_spec.o 00:02:49.122 CXX test/cpp_headers/init.o 00:02:49.122 CXX test/cpp_headers/ioat_spec.o 00:02:49.122 CXX test/cpp_headers/ioat.o 00:02:49.122 CXX test/cpp_headers/iscsi_spec.o 00:02:49.122 CXX test/cpp_headers/json.o 00:02:49.122 CXX test/cpp_headers/jsonrpc.o 00:02:49.122 CXX test/cpp_headers/keyring.o 00:02:49.122 CXX test/cpp_headers/keyring_module.o 00:02:49.122 CXX test/cpp_headers/likely.o 00:02:49.122 CXX test/cpp_headers/log.o 00:02:49.122 CXX test/cpp_headers/md5.o 00:02:49.122 CXX test/cpp_headers/lvol.o 00:02:49.122 CXX test/cpp_headers/memory.o 00:02:49.122 CXX test/cpp_headers/nbd.o 00:02:49.122 CXX test/cpp_headers/mmio.o 00:02:49.122 CXX test/cpp_headers/net.o 00:02:49.122 CXX test/cpp_headers/notify.o 00:02:49.122 CXX test/cpp_headers/nvme.o 00:02:49.122 CXX test/cpp_headers/nvme_intel.o 00:02:49.122 CXX test/cpp_headers/nvme_ocssd.o 00:02:49.122 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:49.122 CXX test/cpp_headers/nvme_spec.o 00:02:49.122 CXX test/cpp_headers/nvme_zns.o 00:02:49.122 CXX test/cpp_headers/nvmf_cmd.o 00:02:49.122 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:49.122 CXX test/cpp_headers/nvmf.o 00:02:49.122 CXX test/cpp_headers/nvmf_spec.o 00:02:49.122 CXX test/cpp_headers/nvmf_transport.o 00:02:49.122 CC test/thread/poller_perf/poller_perf.o 00:02:49.122 CC test/app/jsoncat/jsoncat.o 00:02:49.122 CXX test/cpp_headers/opal.o 00:02:49.122 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:49.122 CC examples/util/zipf/zipf.o 00:02:49.122 CC test/env/pci/pci_ut.o 00:02:49.122 CC test/app/histogram_perf/histogram_perf.o 00:02:49.122 CC test/env/vtophys/vtophys.o 00:02:49.122 CC test/app/stub/stub.o 00:02:49.122 CXX test/cpp_headers/opal_spec.o 00:02:49.122 CC test/app/bdev_svc/bdev_svc.o 00:02:49.122 CC app/fio/nvme/fio_plugin.o 00:02:49.122 CC test/env/memory/memory_ut.o 00:02:49.122 CC examples/ioat/verify/verify.o 00:02:49.122 CC test/dma/test_dma/test_dma.o 00:02:49.122 CC examples/ioat/perf/perf.o 00:02:49.396 CC app/fio/bdev/fio_plugin.o 00:02:49.396 LINK spdk_lspci 00:02:49.396 LINK rpc_client_test 00:02:49.396 LINK interrupt_tgt 00:02:49.666 LINK spdk_nvme_discover 00:02:49.666 CC test/env/mem_callbacks/mem_callbacks.o 00:02:49.666 LINK spdk_tgt 00:02:49.666 LINK nvmf_tgt 00:02:49.666 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:49.666 LINK poller_perf 00:02:49.666 LINK iscsi_tgt 00:02:49.666 LINK histogram_perf 00:02:49.666 LINK vtophys 00:02:49.666 LINK zipf 00:02:49.666 CXX test/cpp_headers/pci_ids.o 00:02:49.666 LINK env_dpdk_post_init 00:02:49.666 LINK jsoncat 00:02:49.666 CXX test/cpp_headers/pipe.o 00:02:49.666 LINK bdev_svc 00:02:49.666 CXX test/cpp_headers/queue.o 00:02:49.666 CXX test/cpp_headers/reduce.o 00:02:49.666 CXX test/cpp_headers/rpc.o 00:02:49.666 CXX test/cpp_headers/scheduler.o 00:02:49.666 CXX test/cpp_headers/scsi.o 00:02:49.666 CXX test/cpp_headers/scsi_spec.o 00:02:49.931 CXX test/cpp_headers/sock.o 00:02:49.931 CXX test/cpp_headers/stdinc.o 00:02:49.931 CXX test/cpp_headers/thread.o 00:02:49.931 CXX test/cpp_headers/string.o 00:02:49.931 CXX test/cpp_headers/trace.o 00:02:49.931 LINK 
spdk_trace_record 00:02:49.931 CXX test/cpp_headers/trace_parser.o 00:02:49.931 CXX test/cpp_headers/tree.o 00:02:49.931 CXX test/cpp_headers/ublk.o 00:02:49.931 CXX test/cpp_headers/util.o 00:02:49.931 CXX test/cpp_headers/uuid.o 00:02:49.931 LINK stub 00:02:49.931 CXX test/cpp_headers/version.o 00:02:49.931 CXX test/cpp_headers/vfio_user_pci.o 00:02:49.931 CXX test/cpp_headers/vfio_user_spec.o 00:02:49.931 CXX test/cpp_headers/vhost.o 00:02:49.931 CXX test/cpp_headers/vmd.o 00:02:49.931 CXX test/cpp_headers/xor.o 00:02:49.931 CXX test/cpp_headers/zipf.o 00:02:49.931 LINK verify 00:02:49.931 LINK ioat_perf 00:02:49.931 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:49.931 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:49.931 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:49.931 LINK spdk_dd 00:02:49.931 LINK spdk_trace 00:02:50.190 LINK pci_ut 00:02:50.190 LINK test_dma 00:02:50.190 CC test/event/event_perf/event_perf.o 00:02:50.190 CC test/event/reactor/reactor.o 00:02:50.190 CC test/event/reactor_perf/reactor_perf.o 00:02:50.190 CC examples/sock/hello_world/hello_sock.o 00:02:50.190 CC test/event/app_repeat/app_repeat.o 00:02:50.190 CC examples/idxd/perf/perf.o 00:02:50.450 CC examples/vmd/lsvmd/lsvmd.o 00:02:50.450 CC examples/vmd/led/led.o 00:02:50.450 CC test/event/scheduler/scheduler.o 00:02:50.450 LINK nvme_fuzz 00:02:50.450 CC examples/thread/thread/thread_ex.o 00:02:50.450 LINK mem_callbacks 00:02:50.450 CC app/vhost/vhost.o 00:02:50.450 LINK spdk_bdev 00:02:50.450 LINK vhost_fuzz 00:02:50.450 LINK event_perf 00:02:50.450 LINK reactor 00:02:50.450 LINK reactor_perf 00:02:50.450 LINK app_repeat 00:02:50.450 LINK lsvmd 00:02:50.450 LINK spdk_nvme_identify 00:02:50.450 LINK spdk_nvme 00:02:50.450 LINK spdk_nvme_perf 00:02:50.450 LINK led 00:02:50.711 LINK hello_sock 00:02:50.711 LINK scheduler 00:02:50.711 LINK vhost 00:02:50.711 LINK thread 00:02:50.711 LINK spdk_top 00:02:50.711 CC test/nvme/reset/reset.o 00:02:50.711 CC test/nvme/cuse/cuse.o 00:02:50.711 CC test/nvme/sgl/sgl.o 00:02:50.711 CC test/nvme/e2edp/nvme_dp.o 00:02:50.711 LINK idxd_perf 00:02:50.711 CC test/nvme/startup/startup.o 00:02:50.711 CC test/nvme/connect_stress/connect_stress.o 00:02:50.711 CC test/nvme/err_injection/err_injection.o 00:02:50.711 CC test/nvme/boot_partition/boot_partition.o 00:02:50.711 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:50.711 CC test/nvme/compliance/nvme_compliance.o 00:02:50.711 CC test/nvme/overhead/overhead.o 00:02:50.711 CC test/nvme/aer/aer.o 00:02:50.711 CC test/nvme/fdp/fdp.o 00:02:50.711 CC test/nvme/fused_ordering/fused_ordering.o 00:02:50.711 CC test/nvme/reserve/reserve.o 00:02:50.711 CC test/nvme/simple_copy/simple_copy.o 00:02:50.711 CC test/accel/dif/dif.o 00:02:50.711 CC test/blobfs/mkfs/mkfs.o 00:02:50.971 CC test/lvol/esnap/esnap.o 00:02:50.971 LINK memory_ut 00:02:50.971 LINK boot_partition 00:02:50.971 LINK startup 00:02:50.971 LINK connect_stress 00:02:50.971 LINK err_injection 00:02:50.971 LINK doorbell_aers 00:02:50.971 LINK fused_ordering 00:02:50.971 LINK reserve 00:02:50.971 LINK mkfs 00:02:50.971 LINK reset 00:02:50.971 LINK nvme_dp 00:02:50.971 LINK simple_copy 00:02:50.971 LINK sgl 00:02:50.971 CC examples/nvme/hello_world/hello_world.o 00:02:50.971 CC examples/nvme/arbitration/arbitration.o 00:02:50.971 CC examples/nvme/reconnect/reconnect.o 00:02:50.971 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:50.971 LINK aer 00:02:50.971 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:50.971 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:50.971 LINK 
overhead 00:02:50.971 CC examples/nvme/abort/abort.o 00:02:50.971 CC examples/nvme/hotplug/hotplug.o 00:02:50.971 LINK nvme_compliance 00:02:51.229 LINK fdp 00:02:51.229 CC examples/accel/perf/accel_perf.o 00:02:51.229 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:51.229 CC examples/blob/cli/blobcli.o 00:02:51.229 CC examples/blob/hello_world/hello_blob.o 00:02:51.229 LINK pmr_persistence 00:02:51.229 LINK cmb_copy 00:02:51.229 LINK hello_world 00:02:51.229 LINK hotplug 00:02:51.487 LINK reconnect 00:02:51.487 LINK arbitration 00:02:51.487 LINK abort 00:02:51.487 LINK hello_blob 00:02:51.487 LINK hello_fsdev 00:02:51.487 LINK dif 00:02:51.487 LINK nvme_manage 00:02:51.487 LINK accel_perf 00:02:51.746 LINK blobcli 00:02:51.746 LINK iscsi_fuzz 00:02:52.006 LINK cuse 00:02:52.006 CC test/bdev/bdevio/bdevio.o 00:02:52.006 CC examples/bdev/hello_world/hello_bdev.o 00:02:52.006 CC examples/bdev/bdevperf/bdevperf.o 00:02:52.265 LINK hello_bdev 00:02:52.265 LINK bdevio 00:02:52.834 LINK bdevperf 00:02:53.402 CC examples/nvmf/nvmf/nvmf.o 00:02:53.663 LINK nvmf 00:02:55.569 LINK esnap 00:02:55.830 00:02:55.830 real 0m59.261s 00:02:55.830 user 8m53.362s 00:02:55.830 sys 3m32.933s 00:02:55.830 00:47:02 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:55.830 00:47:02 make -- common/autotest_common.sh@10 -- $ set +x 00:02:55.830 ************************************ 00:02:55.830 END TEST make 00:02:55.830 ************************************ 00:02:55.830 00:47:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:55.830 00:47:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:55.830 00:47:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:55.830 00:47:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.830 00:47:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:55.830 00:47:02 -- pm/common@44 -- $ pid=62392 00:02:55.830 00:47:02 -- pm/common@50 -- $ kill -TERM 62392 00:02:55.830 00:47:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.830 00:47:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:55.830 00:47:02 -- pm/common@44 -- $ pid=62393 00:02:55.830 00:47:02 -- pm/common@50 -- $ kill -TERM 62393 00:02:55.830 00:47:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.830 00:47:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:55.830 00:47:02 -- pm/common@44 -- $ pid=62395 00:02:55.830 00:47:02 -- pm/common@50 -- $ kill -TERM 62395 00:02:55.830 00:47:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.830 00:47:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:55.830 00:47:02 -- pm/common@44 -- $ pid=62418 00:02:55.830 00:47:02 -- pm/common@50 -- $ sudo -E kill -TERM 62418 00:02:55.830 00:47:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:55.830 00:47:02 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf 00:02:56.090 00:47:02 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:56.090 00:47:02 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:56.090 00:47:02 -- common/autotest_common.sh@1693 -- # awk 
'{print $NF}' 00:02:56.090 00:47:02 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:56.090 00:47:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:56.090 00:47:02 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:56.091 00:47:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:56.091 00:47:02 -- scripts/common.sh@336 -- # IFS=.-: 00:02:56.091 00:47:02 -- scripts/common.sh@336 -- # read -ra ver1 00:02:56.091 00:47:02 -- scripts/common.sh@337 -- # IFS=.-: 00:02:56.091 00:47:02 -- scripts/common.sh@337 -- # read -ra ver2 00:02:56.091 00:47:02 -- scripts/common.sh@338 -- # local 'op=<' 00:02:56.091 00:47:02 -- scripts/common.sh@340 -- # ver1_l=2 00:02:56.091 00:47:02 -- scripts/common.sh@341 -- # ver2_l=1 00:02:56.091 00:47:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:56.091 00:47:02 -- scripts/common.sh@344 -- # case "$op" in 00:02:56.091 00:47:02 -- scripts/common.sh@345 -- # : 1 00:02:56.091 00:47:02 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:56.091 00:47:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:56.091 00:47:02 -- scripts/common.sh@365 -- # decimal 1 00:02:56.091 00:47:02 -- scripts/common.sh@353 -- # local d=1 00:02:56.091 00:47:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:56.091 00:47:02 -- scripts/common.sh@355 -- # echo 1 00:02:56.091 00:47:02 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:56.091 00:47:02 -- scripts/common.sh@366 -- # decimal 2 00:02:56.091 00:47:02 -- scripts/common.sh@353 -- # local d=2 00:02:56.091 00:47:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:56.091 00:47:02 -- scripts/common.sh@355 -- # echo 2 00:02:56.091 00:47:02 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:56.091 00:47:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:56.091 00:47:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:56.091 00:47:02 -- scripts/common.sh@368 -- # return 0 00:02:56.091 00:47:02 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:56.091 00:47:02 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:56.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.091 --rc genhtml_branch_coverage=1 00:02:56.091 --rc genhtml_function_coverage=1 00:02:56.091 --rc genhtml_legend=1 00:02:56.091 --rc geninfo_all_blocks=1 00:02:56.091 --rc geninfo_unexecuted_blocks=1 00:02:56.091 00:02:56.091 ' 00:02:56.091 00:47:02 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:56.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.091 --rc genhtml_branch_coverage=1 00:02:56.091 --rc genhtml_function_coverage=1 00:02:56.091 --rc genhtml_legend=1 00:02:56.091 --rc geninfo_all_blocks=1 00:02:56.091 --rc geninfo_unexecuted_blocks=1 00:02:56.091 00:02:56.091 ' 00:02:56.091 00:47:02 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:56.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.091 --rc genhtml_branch_coverage=1 00:02:56.091 --rc genhtml_function_coverage=1 00:02:56.091 --rc genhtml_legend=1 00:02:56.091 --rc geninfo_all_blocks=1 00:02:56.091 --rc geninfo_unexecuted_blocks=1 00:02:56.091 00:02:56.091 ' 00:02:56.091 00:47:02 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:56.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.091 --rc genhtml_branch_coverage=1 00:02:56.091 --rc genhtml_function_coverage=1 00:02:56.091 --rc genhtml_legend=1 00:02:56.091 --rc geninfo_all_blocks=1 
00:02:56.091 --rc geninfo_unexecuted_blocks=1 00:02:56.091 00:02:56.091 ' 00:02:56.091 00:47:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:02:56.091 00:47:02 -- nvmf/common.sh@7 -- # uname -s 00:02:56.091 00:47:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:56.091 00:47:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:56.091 00:47:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:56.091 00:47:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:56.091 00:47:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:56.091 00:47:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:56.091 00:47:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:56.091 00:47:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:56.091 00:47:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:56.091 00:47:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:56.091 00:47:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:02:56.091 00:47:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:02:56.091 00:47:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:56.091 00:47:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:56.091 00:47:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:56.091 00:47:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:56.091 00:47:02 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:02:56.091 00:47:02 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:56.091 00:47:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:56.091 00:47:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:56.091 00:47:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:56.091 00:47:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.091 00:47:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.091 00:47:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.091 00:47:02 -- paths/export.sh@5 -- # export PATH 00:02:56.091 00:47:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.091 00:47:02 -- nvmf/common.sh@51 -- # : 0 00:02:56.091 00:47:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:56.091 00:47:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:56.091 00:47:02 -- nvmf/common.sh@25 -- # '[' 
0 -eq 1 ']' 00:02:56.091 00:47:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:56.091 00:47:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:56.091 00:47:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:56.091 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:56.091 00:47:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:56.091 00:47:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:56.091 00:47:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:56.091 00:47:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:56.091 00:47:02 -- spdk/autotest.sh@32 -- # uname -s 00:02:56.091 00:47:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:56.091 00:47:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:56.091 00:47:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/coredumps 00:02:56.091 00:47:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:56.091 00:47:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/coredumps 00:02:56.091 00:47:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:56.091 00:47:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:56.091 00:47:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:56.091 00:47:02 -- spdk/autotest.sh@48 -- # udevadm_pid=125434 00:02:56.091 00:47:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:56.091 00:47:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:56.091 00:47:02 -- pm/common@17 -- # local monitor 00:02:56.091 00:47:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.091 00:47:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.091 00:47:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.091 00:47:02 -- pm/common@21 -- # date +%s 00:02:56.091 00:47:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.091 00:47:02 -- pm/common@21 -- # date +%s 00:02:56.091 00:47:02 -- pm/common@25 -- # sleep 1 00:02:56.091 00:47:02 -- pm/common@21 -- # date +%s 00:02:56.091 00:47:02 -- pm/common@21 -- # date +%s 00:02:56.091 00:47:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731973622 00:02:56.352 00:47:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731973622 00:02:56.352 00:47:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731973622 00:02:56.352 00:47:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731973622 00:02:56.352 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731973622_collect-vmstat.pm.log 00:02:56.352 Redirecting to 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731973622_collect-cpu-load.pm.log 00:02:56.352 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731973622_collect-cpu-temp.pm.log 00:02:56.352 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731973622_collect-bmc-pm.bmc.pm.log 00:02:57.292 00:47:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:57.292 00:47:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:57.292 00:47:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:57.292 00:47:03 -- common/autotest_common.sh@10 -- # set +x 00:02:57.292 00:47:03 -- spdk/autotest.sh@59 -- # create_test_list 00:02:57.292 00:47:03 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:57.292 00:47:03 -- common/autotest_common.sh@10 -- # set +x 00:02:57.292 00:47:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/autotest.sh 00:02:57.292 00:47:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:57.292 00:47:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:57.292 00:47:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output 00:02:57.292 00:47:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:57.292 00:47:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:57.292 00:47:03 -- common/autotest_common.sh@1457 -- # uname 00:02:57.292 00:47:03 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:57.292 00:47:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:57.292 00:47:03 -- common/autotest_common.sh@1477 -- # uname 00:02:57.292 00:47:03 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:57.292 00:47:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:57.292 00:47:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:57.292 lcov: LCOV version 1.15 00:02:57.292 00:47:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_base.info 00:03:15.396 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:15.396 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:21.973 00:47:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:21.973 00:47:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:21.973 00:47:27 -- common/autotest_common.sh@10 -- # set +x 00:03:21.973 00:47:27 -- spdk/autotest.sh@78 -- # rm -f 00:03:21.973 00:47:27 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.513 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:03:24.513 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:24.513 0000:00:04.7 
(8086 2021): Already using the ioatdma driver 00:03:24.513 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:24.513 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:24.513 00:47:31 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:24.513 00:47:31 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:24.513 00:47:31 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:24.513 00:47:31 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:24.513 00:47:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:24.513 00:47:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:24.513 00:47:31 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:24.513 00:47:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:24.513 00:47:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:24.513 00:47:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:24.513 00:47:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:24.514 00:47:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:24.514 00:47:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:24.514 00:47:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:24.514 00:47:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:24.514 00:47:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:24.514 00:47:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:24.514 00:47:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:24.514 00:47:31 -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:03:24.514 00:47:31 -- common/autotest_common.sh@1662 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:03:24.514 00:47:31 -- spdk/autotest.sh@85 -- # (( 1 > 0 )) 00:03:24.514 00:47:31 -- spdk/autotest.sh@90 -- # export PCI_BLOCKED=0000:5f:00.0 00:03:24.514 00:47:31 -- spdk/autotest.sh@90 -- # PCI_BLOCKED=0000:5f:00.0 00:03:24.514 00:47:31 -- spdk/autotest.sh@91 -- # export PCI_ZONED=0000:5f:00.0 00:03:24.514 00:47:31 -- spdk/autotest.sh@91 -- # PCI_ZONED=0000:5f:00.0 00:03:24.514 00:47:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:24.514 00:47:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:24.514 00:47:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:24.514 00:47:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:24.514 00:47:31 -- 
scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:24.514 No valid GPT data, bailing 00:03:24.514 00:47:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:24.514 00:47:31 -- scripts/common.sh@394 -- # pt= 00:03:24.514 00:47:31 -- scripts/common.sh@395 -- # return 1 00:03:24.514 00:47:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:24.514 1+0 records in 00:03:24.514 1+0 records out 00:03:24.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0013301 s, 788 MB/s 00:03:24.514 00:47:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:24.514 00:47:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:24.514 00:47:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:24.514 00:47:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:24.514 00:47:31 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:24.774 No valid GPT data, bailing 00:03:24.774 00:47:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:24.774 00:47:31 -- scripts/common.sh@394 -- # pt= 00:03:24.774 00:47:31 -- scripts/common.sh@395 -- # return 1 00:03:24.774 00:47:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:24.774 1+0 records in 00:03:24.774 1+0 records out 00:03:24.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510928 s, 205 MB/s 00:03:24.774 00:47:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:24.774 00:47:31 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:03:24.774 00:47:31 -- spdk/autotest.sh@99 -- # continue 00:03:24.774 00:47:31 -- spdk/autotest.sh@105 -- # sync 00:03:24.774 00:47:31 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:24.774 00:47:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:24.774 00:47:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:31.366 00:47:37 -- spdk/autotest.sh@111 -- # uname -s 00:03:31.366 00:47:37 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:31.366 00:47:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:31.366 00:47:37 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status 00:03:33.907 Hugepages 00:03:33.907 node hugesize free / total 00:03:33.907 node0 1048576kB 0 / 0 00:03:33.907 node0 2048kB 0 / 0 00:03:33.907 node1 1048576kB 0 / 0 00:03:33.907 node1 2048kB 0 / 0 00:03:33.907 00:03:33.907 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:33.907 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:33.907 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:33.907 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:33.907 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:33.907 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:33.907 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:33.907 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:33.907 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:33.907 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:33.907 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:03:33.907 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:33.907 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:33.907 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:33.907 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:33.907 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:33.907 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 
00:03:33.907 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:33.907 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:33.907 00:47:40 -- spdk/autotest.sh@117 -- # uname -s 00:03:33.907 00:47:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:33.907 00:47:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:33.907 00:47:40 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:36.446 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:36.705 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:36.705 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:36.705 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:36.705 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:36.705 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:36.705 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:36.705 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:36.705 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:36.705 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:36.705 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:36.705 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:36.964 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:36.964 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:36.964 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:36.964 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:36.964 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:37.902 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:37.902 00:47:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:38.840 00:47:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:38.840 00:47:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:38.840 00:47:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:38.840 00:47:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:38.840 00:47:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:38.840 00:47:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:38.840 00:47:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:38.840 00:47:45 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:38.840 00:47:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:38.840 00:47:45 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:38.840 00:47:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:38.840 00:47:45 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.377 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:41.946 Waiting for block devices as requested 00:03:41.947 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:41.947 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:41.947 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:42.206 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:42.206 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:42.206 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:42.466 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:42.466 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:42.466 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:42.733 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:42.733 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:42.733 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:42.733 
0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:42.992 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:42.992 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:42.992 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:43.251 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:43.251 00:47:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:43.251 00:47:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:43.251 00:47:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:43.251 00:47:49 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:43.251 00:47:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:43.251 00:47:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:43.251 00:47:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:43.251 00:47:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:43.251 00:47:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:43.251 00:47:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:43.251 00:47:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:43.251 00:47:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:43.251 00:47:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:43.251 00:47:49 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:43.251 00:47:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:43.251 00:47:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:43.251 00:47:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:43.251 00:47:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:43.251 00:47:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:43.251 00:47:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:43.251 00:47:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:43.251 00:47:49 -- common/autotest_common.sh@1543 -- # continue 00:03:43.251 00:47:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:43.251 00:47:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:43.251 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:03:43.251 00:47:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:43.251 00:47:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.251 00:47:49 -- common/autotest_common.sh@10 -- # set +x 00:03:43.251 00:47:49 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:45.790 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:46.360 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:80:04.4 (8086 2021): ioatdma -> 
vfio-pci 00:03:46.360 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:46.360 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:47.299 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.299 00:47:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:47.299 00:47:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.299 00:47:53 -- common/autotest_common.sh@10 -- # set +x 00:03:47.299 00:47:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:47.299 00:47:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:47.299 00:47:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:47.299 00:47:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:47.299 00:47:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:47.299 00:47:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:47.299 00:47:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:47.299 00:47:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:47.299 00:47:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:47.299 00:47:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:47.299 00:47:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.299 00:47:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:47.299 00:47:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:47.559 00:47:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:47.559 00:47:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:47.559 00:47:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:47.559 00:47:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:47.559 00:47:54 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:47.559 00:47:54 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:47.559 00:47:54 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:47.559 00:47:54 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:47.559 00:47:54 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:47.559 00:47:54 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:47.559 00:47:54 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=139940 00:03:47.559 00:47:54 -- common/autotest_common.sh@1585 -- # waitforlisten 139940 00:03:47.559 00:47:54 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.559 00:47:54 -- common/autotest_common.sh@835 -- # '[' -z 139940 ']' 00:03:47.559 00:47:54 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.559 00:47:54 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:47.559 00:47:54 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.559 00:47:54 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:47.559 00:47:54 -- common/autotest_common.sh@10 -- # set +x 00:03:47.559 [2024-11-19 00:47:54.174711] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:03:47.559 [2024-11-19 00:47:54.174801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139940 ] 00:03:47.818 [2024-11-19 00:47:54.299956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.818 [2024-11-19 00:47:54.409819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.753 00:47:55 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:48.753 00:47:55 -- common/autotest_common.sh@868 -- # return 0 00:03:48.753 00:47:55 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:48.753 00:47:55 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:48.753 00:47:55 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:52.038 nvme0n1 00:03:52.038 00:47:58 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:52.038 [2024-11-19 00:47:58.451023] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:52.038 [2024-11-19 00:47:58.451068] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:52.038 request: 00:03:52.038 { 00:03:52.038 "nvme_ctrlr_name": "nvme0", 00:03:52.038 "password": "test", 00:03:52.038 "method": "bdev_nvme_opal_revert", 00:03:52.038 "req_id": 1 00:03:52.038 } 00:03:52.038 Got JSON-RPC error response 00:03:52.038 response: 00:03:52.038 { 00:03:52.038 "code": -32603, 00:03:52.038 "message": "Internal error" 00:03:52.038 } 00:03:52.038 00:47:58 -- common/autotest_common.sh@1591 -- # true 00:03:52.038 00:47:58 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:52.038 00:47:58 -- common/autotest_common.sh@1595 -- # killprocess 139940 00:03:52.038 00:47:58 -- common/autotest_common.sh@954 -- # '[' -z 139940 ']' 00:03:52.038 00:47:58 -- common/autotest_common.sh@958 -- # kill -0 139940 00:03:52.038 00:47:58 -- common/autotest_common.sh@959 -- # uname 00:03:52.038 00:47:58 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:52.038 00:47:58 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139940 00:03:52.038 00:47:58 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:52.038 00:47:58 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:52.038 00:47:58 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139940' 00:03:52.038 killing process with pid 139940 00:03:52.038 00:47:58 -- common/autotest_common.sh@973 -- # kill 139940 00:03:52.038 00:47:58 -- common/autotest_common.sh@978 -- # wait 139940 00:03:56.228 00:48:02 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:56.228 00:48:02 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:56.228 00:48:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:56.228 00:48:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:56.228 00:48:02 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:56.228 00:48:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.228 00:48:02 -- common/autotest_common.sh@10 -- # set +x 00:03:56.228 00:48:02 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:56.228 00:48:02 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env.sh 
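
The OPAL revert exchange traced above can be reproduced by hand against an already-running spdk_tgt; the sketch below only reuses the two rpc.py calls visible in the trace (the workspace path and the 0000:5e:00.0 address are specific to this host, and on this particular drive the revert is expected to fail with the same -32603 "Internal error" response captured above):

  cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
  # attach the local NVMe controller as bdev "nvme0" over the default /var/tmp/spdk.sock
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
  # request the TPer revert with the test password; controller error 18 surfaces
  # as JSON-RPC -32603, exactly as shown in the log, and the test tolerates it
  ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
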
00:03:56.228 00:48:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.228 00:48:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.228 00:48:02 -- common/autotest_common.sh@10 -- # set +x 00:03:56.228 ************************************ 00:03:56.228 START TEST env 00:03:56.228 ************************************ 00:03:56.228 00:48:02 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env.sh 00:03:56.228 * Looking for test storage... 00:03:56.228 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env 00:03:56.228 00:48:02 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:56.228 00:48:02 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:56.228 00:48:02 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:56.228 00:48:02 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:56.228 00:48:02 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.228 00:48:02 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.228 00:48:02 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.228 00:48:02 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.228 00:48:02 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.228 00:48:02 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.228 00:48:02 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.228 00:48:02 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.228 00:48:02 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.228 00:48:02 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.228 00:48:02 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.228 00:48:02 env -- scripts/common.sh@344 -- # case "$op" in 00:03:56.228 00:48:02 env -- scripts/common.sh@345 -- # : 1 00:03:56.228 00:48:02 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.228 00:48:02 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.228 00:48:02 env -- scripts/common.sh@365 -- # decimal 1 00:03:56.228 00:48:02 env -- scripts/common.sh@353 -- # local d=1 00:03:56.228 00:48:02 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.228 00:48:02 env -- scripts/common.sh@355 -- # echo 1 00:03:56.228 00:48:02 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.228 00:48:02 env -- scripts/common.sh@366 -- # decimal 2 00:03:56.228 00:48:02 env -- scripts/common.sh@353 -- # local d=2 00:03:56.228 00:48:02 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.228 00:48:02 env -- scripts/common.sh@355 -- # echo 2 00:03:56.229 00:48:02 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.229 00:48:02 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.229 00:48:02 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.229 00:48:02 env -- scripts/common.sh@368 -- # return 0 00:03:56.229 00:48:02 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.229 00:48:02 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:56.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.229 --rc genhtml_branch_coverage=1 00:03:56.229 --rc genhtml_function_coverage=1 00:03:56.229 --rc genhtml_legend=1 00:03:56.229 --rc geninfo_all_blocks=1 00:03:56.229 --rc geninfo_unexecuted_blocks=1 00:03:56.229 00:03:56.229 ' 00:03:56.229 00:48:02 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:56.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.229 --rc genhtml_branch_coverage=1 00:03:56.229 --rc genhtml_function_coverage=1 00:03:56.229 --rc genhtml_legend=1 00:03:56.229 --rc geninfo_all_blocks=1 00:03:56.229 --rc geninfo_unexecuted_blocks=1 00:03:56.229 00:03:56.229 ' 00:03:56.229 00:48:02 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:56.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.229 --rc genhtml_branch_coverage=1 00:03:56.229 --rc genhtml_function_coverage=1 00:03:56.229 --rc genhtml_legend=1 00:03:56.229 --rc geninfo_all_blocks=1 00:03:56.229 --rc geninfo_unexecuted_blocks=1 00:03:56.229 00:03:56.229 ' 00:03:56.229 00:48:02 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:56.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.229 --rc genhtml_branch_coverage=1 00:03:56.229 --rc genhtml_function_coverage=1 00:03:56.229 --rc genhtml_legend=1 00:03:56.229 --rc geninfo_all_blocks=1 00:03:56.229 --rc geninfo_unexecuted_blocks=1 00:03:56.229 00:03:56.229 ' 00:03:56.229 00:48:02 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.229 00:48:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.229 00:48:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.229 00:48:02 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.229 ************************************ 00:03:56.229 START TEST env_memory 00:03:56.229 ************************************ 00:03:56.229 00:48:02 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.229 00:03:56.229 00:03:56.229 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.229 http://cunit.sourceforge.net/ 00:03:56.229 00:03:56.229 00:03:56.229 Suite: memory 00:03:56.229 Test: alloc and free memory map ...[2024-11-19 00:48:02.347385] 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:56.229 passed 00:03:56.229 Test: mem map translation ...[2024-11-19 00:48:02.391397] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:56.229 [2024-11-19 00:48:02.391422] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:56.229 [2024-11-19 00:48:02.391469] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:56.229 [2024-11-19 00:48:02.391482] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:56.229 passed 00:03:56.229 Test: mem map registration ...[2024-11-19 00:48:02.456754] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:56.229 [2024-11-19 00:48:02.456777] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:56.229 passed 00:03:56.229 Test: mem map adjacent registrations ...passed 00:03:56.229 00:03:56.229 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.229 suites 1 1 n/a 0 0 00:03:56.229 tests 4 4 4 0 0 00:03:56.229 asserts 152 152 152 0 n/a 00:03:56.229 00:03:56.229 Elapsed time = 0.232 seconds 00:03:56.229 00:03:56.229 real 0m0.259s 00:03:56.229 user 0m0.243s 00:03:56.229 sys 0m0.015s 00:03:56.229 00:48:02 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.229 00:48:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:56.229 ************************************ 00:03:56.229 END TEST env_memory 00:03:56.229 ************************************ 00:03:56.229 00:48:02 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.229 00:48:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.229 00:48:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.229 00:48:02 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.229 ************************************ 00:03:56.229 START TEST env_vtophys 00:03:56.229 ************************************ 00:03:56.229 00:48:02 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.229 EAL: lib.eal log level changed from notice to debug 00:03:56.229 EAL: Detected lcore 0 as core 0 on socket 0 00:03:56.229 EAL: Detected lcore 1 as core 1 on socket 0 00:03:56.229 EAL: Detected lcore 2 as core 2 on socket 0 00:03:56.229 EAL: Detected lcore 3 as core 3 on socket 0 00:03:56.229 EAL: Detected lcore 4 as core 4 on socket 0 00:03:56.229 EAL: Detected lcore 5 as core 5 on socket 0 00:03:56.229 EAL: Detected lcore 6 as core 6 on socket 0 00:03:56.229 EAL: Detected lcore 7 as core 8 on socket 0 00:03:56.229 EAL: Detected lcore 8 as core 9 on socket 0 00:03:56.229 EAL: Detected lcore 9 as core 10 on socket 0 00:03:56.229 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:56.229 EAL: Detected lcore 11 as core 12 on socket 0 00:03:56.229 EAL: Detected lcore 12 as core 13 on socket 0 00:03:56.229 EAL: Detected lcore 13 as core 16 on socket 0 00:03:56.229 EAL: Detected lcore 14 as core 17 on socket 0 00:03:56.229 EAL: Detected lcore 15 as core 18 on socket 0 00:03:56.229 EAL: Detected lcore 16 as core 19 on socket 0 00:03:56.229 EAL: Detected lcore 17 as core 20 on socket 0 00:03:56.229 EAL: Detected lcore 18 as core 21 on socket 0 00:03:56.229 EAL: Detected lcore 19 as core 25 on socket 0 00:03:56.229 EAL: Detected lcore 20 as core 26 on socket 0 00:03:56.229 EAL: Detected lcore 21 as core 27 on socket 0 00:03:56.229 EAL: Detected lcore 22 as core 28 on socket 0 00:03:56.229 EAL: Detected lcore 23 as core 29 on socket 0 00:03:56.229 EAL: Detected lcore 24 as core 0 on socket 1 00:03:56.229 EAL: Detected lcore 25 as core 1 on socket 1 00:03:56.229 EAL: Detected lcore 26 as core 2 on socket 1 00:03:56.229 EAL: Detected lcore 27 as core 3 on socket 1 00:03:56.229 EAL: Detected lcore 28 as core 4 on socket 1 00:03:56.229 EAL: Detected lcore 29 as core 5 on socket 1 00:03:56.229 EAL: Detected lcore 30 as core 6 on socket 1 00:03:56.229 EAL: Detected lcore 31 as core 8 on socket 1 00:03:56.229 EAL: Detected lcore 32 as core 9 on socket 1 00:03:56.229 EAL: Detected lcore 33 as core 10 on socket 1 00:03:56.229 EAL: Detected lcore 34 as core 11 on socket 1 00:03:56.229 EAL: Detected lcore 35 as core 12 on socket 1 00:03:56.229 EAL: Detected lcore 36 as core 13 on socket 1 00:03:56.229 EAL: Detected lcore 37 as core 16 on socket 1 00:03:56.229 EAL: Detected lcore 38 as core 17 on socket 1 00:03:56.229 EAL: Detected lcore 39 as core 18 on socket 1 00:03:56.229 EAL: Detected lcore 40 as core 19 on socket 1 00:03:56.229 EAL: Detected lcore 41 as core 20 on socket 1 00:03:56.229 EAL: Detected lcore 42 as core 21 on socket 1 00:03:56.229 EAL: Detected lcore 43 as core 25 on socket 1 00:03:56.229 EAL: Detected lcore 44 as core 26 on socket 1 00:03:56.229 EAL: Detected lcore 45 as core 27 on socket 1 00:03:56.229 EAL: Detected lcore 46 as core 28 on socket 1 00:03:56.229 EAL: Detected lcore 47 as core 29 on socket 1 00:03:56.229 EAL: Detected lcore 48 as core 0 on socket 0 00:03:56.229 EAL: Detected lcore 49 as core 1 on socket 0 00:03:56.229 EAL: Detected lcore 50 as core 2 on socket 0 00:03:56.229 EAL: Detected lcore 51 as core 3 on socket 0 00:03:56.230 EAL: Detected lcore 52 as core 4 on socket 0 00:03:56.230 EAL: Detected lcore 53 as core 5 on socket 0 00:03:56.230 EAL: Detected lcore 54 as core 6 on socket 0 00:03:56.230 EAL: Detected lcore 55 as core 8 on socket 0 00:03:56.230 EAL: Detected lcore 56 as core 9 on socket 0 00:03:56.230 EAL: Detected lcore 57 as core 10 on socket 0 00:03:56.230 EAL: Detected lcore 58 as core 11 on socket 0 00:03:56.230 EAL: Detected lcore 59 as core 12 on socket 0 00:03:56.230 EAL: Detected lcore 60 as core 13 on socket 0 00:03:56.230 EAL: Detected lcore 61 as core 16 on socket 0 00:03:56.230 EAL: Detected lcore 62 as core 17 on socket 0 00:03:56.230 EAL: Detected lcore 63 as core 18 on socket 0 00:03:56.230 EAL: Detected lcore 64 as core 19 on socket 0 00:03:56.230 EAL: Detected lcore 65 as core 20 on socket 0 00:03:56.230 EAL: Detected lcore 66 as core 21 on socket 0 00:03:56.230 EAL: Detected lcore 67 as core 25 on socket 0 00:03:56.230 EAL: Detected lcore 68 as core 26 on socket 0 00:03:56.230 EAL: Detected lcore 69 as core 27 on socket 0 00:03:56.230 EAL: Detected lcore 70 as core 28 on socket 0 00:03:56.230 
EAL: Detected lcore 71 as core 29 on socket 0 00:03:56.230 EAL: Detected lcore 72 as core 0 on socket 1 00:03:56.230 EAL: Detected lcore 73 as core 1 on socket 1 00:03:56.230 EAL: Detected lcore 74 as core 2 on socket 1 00:03:56.230 EAL: Detected lcore 75 as core 3 on socket 1 00:03:56.230 EAL: Detected lcore 76 as core 4 on socket 1 00:03:56.230 EAL: Detected lcore 77 as core 5 on socket 1 00:03:56.230 EAL: Detected lcore 78 as core 6 on socket 1 00:03:56.230 EAL: Detected lcore 79 as core 8 on socket 1 00:03:56.230 EAL: Detected lcore 80 as core 9 on socket 1 00:03:56.230 EAL: Detected lcore 81 as core 10 on socket 1 00:03:56.230 EAL: Detected lcore 82 as core 11 on socket 1 00:03:56.230 EAL: Detected lcore 83 as core 12 on socket 1 00:03:56.230 EAL: Detected lcore 84 as core 13 on socket 1 00:03:56.230 EAL: Detected lcore 85 as core 16 on socket 1 00:03:56.230 EAL: Detected lcore 86 as core 17 on socket 1 00:03:56.230 EAL: Detected lcore 87 as core 18 on socket 1 00:03:56.230 EAL: Detected lcore 88 as core 19 on socket 1 00:03:56.230 EAL: Detected lcore 89 as core 20 on socket 1 00:03:56.230 EAL: Detected lcore 90 as core 21 on socket 1 00:03:56.230 EAL: Detected lcore 91 as core 25 on socket 1 00:03:56.230 EAL: Detected lcore 92 as core 26 on socket 1 00:03:56.230 EAL: Detected lcore 93 as core 27 on socket 1 00:03:56.230 EAL: Detected lcore 94 as core 28 on socket 1 00:03:56.230 EAL: Detected lcore 95 as core 29 on socket 1 00:03:56.230 EAL: Maximum logical cores by configuration: 128 00:03:56.230 EAL: Detected CPU lcores: 96 00:03:56.230 EAL: Detected NUMA nodes: 2 00:03:56.230 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:56.230 EAL: Detected shared linkage of DPDK 00:03:56.230 EAL: No shared files mode enabled, IPC will be disabled 00:03:56.230 EAL: Bus pci wants IOVA as 'DC' 00:03:56.230 EAL: Buses did not request a specific IOVA mode. 00:03:56.230 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:56.230 EAL: Selected IOVA mode 'VA' 00:03:56.230 EAL: Probing VFIO support... 00:03:56.230 EAL: IOMMU type 1 (Type 1) is supported 00:03:56.230 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:56.230 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:56.230 EAL: VFIO support initialized 00:03:56.230 EAL: Ask a virtual area of 0x2e000 bytes 00:03:56.230 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:56.230 EAL: Setting up physically contiguous memory... 
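
The EAL lines just above (IOMMU available, IOVA mode 'VA' selected, VFIO support initialized) are the preconditions this vtophys run depends on. They can be sanity-checked from a shell as well, roughly as sketched here; these are standard Linux sysfs/proc locations, not paths printed by this log:

  ls /sys/kernel/iommu_groups            # non-empty when the IOMMU is enabled, which lets EAL pick IOVA=VA
  lsmod | grep vfio                      # vfio / vfio_pci loaded corresponds to "VFIO support initialized"
  grep -i hugepages /proc/meminfo        # 2048 kB hugepages back the memseg lists allocated below
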
00:03:56.230 EAL: Setting maximum number of open files to 524288 00:03:56.230 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:56.230 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:56.230 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:56.230 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.230 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:56.230 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.230 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.230 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:56.230 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:56.230 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.230 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:56.230 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.230 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.230 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:56.230 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:56.230 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.230 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:56.230 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.230 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.230 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:56.230 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:56.230 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.230 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:56.230 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.230 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.230 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:56.230 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:56.230 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:56.230 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.230 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:56.230 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.230 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.230 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:56.230 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:56.230 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.230 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:56.230 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.230 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.230 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:56.230 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:56.230 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.230 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:56.230 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.230 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.230 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:56.230 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:56.230 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.230 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:56.230 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.230 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.230 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:56.230 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:56.230 EAL: Hugepages will be freed exactly as allocated. 00:03:56.230 EAL: No shared files mode enabled, IPC is disabled 00:03:56.230 EAL: No shared files mode enabled, IPC is disabled 00:03:56.230 EAL: TSC frequency is ~2100000 KHz 00:03:56.230 EAL: Main lcore 0 is ready (tid=7f0d78e49a40;cpuset=[0]) 00:03:56.230 EAL: Trying to obtain current memory policy. 00:03:56.230 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.230 EAL: Restoring previous memory policy: 0 00:03:56.230 EAL: request: mp_malloc_sync 00:03:56.230 EAL: No shared files mode enabled, IPC is disabled 00:03:56.230 EAL: Heap on socket 0 was expanded by 2MB 00:03:56.230 EAL: No shared files mode enabled, IPC is disabled 00:03:56.230 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:56.230 EAL: Mem event callback 'spdk:(nil)' registered 00:03:56.230 00:03:56.230 00:03:56.230 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.230 http://cunit.sourceforge.net/ 00:03:56.230 00:03:56.230 00:03:56.230 Suite: components_suite 00:03:56.489 Test: vtophys_malloc_test ...passed 00:03:56.489 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:56.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.489 EAL: Restoring previous memory policy: 4 00:03:56.489 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.489 EAL: request: mp_malloc_sync 00:03:56.489 EAL: No shared files mode enabled, IPC is disabled 00:03:56.489 EAL: Heap on socket 0 was expanded by 4MB 00:03:56.489 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.489 EAL: request: mp_malloc_sync 00:03:56.489 EAL: No shared files mode enabled, IPC is disabled 00:03:56.489 EAL: Heap on socket 0 was shrunk by 4MB 00:03:56.489 EAL: Trying to obtain current memory policy. 00:03:56.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.489 EAL: Restoring previous memory policy: 4 00:03:56.489 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.489 EAL: request: mp_malloc_sync 00:03:56.489 EAL: No shared files mode enabled, IPC is disabled 00:03:56.489 EAL: Heap on socket 0 was expanded by 6MB 00:03:56.489 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.489 EAL: request: mp_malloc_sync 00:03:56.489 EAL: No shared files mode enabled, IPC is disabled 00:03:56.489 EAL: Heap on socket 0 was shrunk by 6MB 00:03:56.489 EAL: Trying to obtain current memory policy. 00:03:56.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.489 EAL: Restoring previous memory policy: 4 00:03:56.489 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.489 EAL: request: mp_malloc_sync 00:03:56.489 EAL: No shared files mode enabled, IPC is disabled 00:03:56.489 EAL: Heap on socket 0 was expanded by 10MB 00:03:56.489 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.489 EAL: request: mp_malloc_sync 00:03:56.489 EAL: No shared files mode enabled, IPC is disabled 00:03:56.489 EAL: Heap on socket 0 was shrunk by 10MB 00:03:56.490 EAL: Trying to obtain current memory policy. 
00:03:56.490 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.490 EAL: Restoring previous memory policy: 4 00:03:56.490 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.490 EAL: request: mp_malloc_sync 00:03:56.490 EAL: No shared files mode enabled, IPC is disabled 00:03:56.490 EAL: Heap on socket 0 was expanded by 18MB 00:03:56.490 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.490 EAL: request: mp_malloc_sync 00:03:56.490 EAL: No shared files mode enabled, IPC is disabled 00:03:56.490 EAL: Heap on socket 0 was shrunk by 18MB 00:03:56.748 EAL: Trying to obtain current memory policy. 00:03:56.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.748 EAL: Restoring previous memory policy: 4 00:03:56.748 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.748 EAL: request: mp_malloc_sync 00:03:56.748 EAL: No shared files mode enabled, IPC is disabled 00:03:56.748 EAL: Heap on socket 0 was expanded by 34MB 00:03:56.748 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.748 EAL: request: mp_malloc_sync 00:03:56.748 EAL: No shared files mode enabled, IPC is disabled 00:03:56.748 EAL: Heap on socket 0 was shrunk by 34MB 00:03:56.748 EAL: Trying to obtain current memory policy. 00:03:56.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.748 EAL: Restoring previous memory policy: 4 00:03:56.748 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.748 EAL: request: mp_malloc_sync 00:03:56.748 EAL: No shared files mode enabled, IPC is disabled 00:03:56.748 EAL: Heap on socket 0 was expanded by 66MB 00:03:57.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.007 EAL: request: mp_malloc_sync 00:03:57.007 EAL: No shared files mode enabled, IPC is disabled 00:03:57.007 EAL: Heap on socket 0 was shrunk by 66MB 00:03:57.007 EAL: Trying to obtain current memory policy. 00:03:57.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.007 EAL: Restoring previous memory policy: 4 00:03:57.007 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.007 EAL: request: mp_malloc_sync 00:03:57.007 EAL: No shared files mode enabled, IPC is disabled 00:03:57.007 EAL: Heap on socket 0 was expanded by 130MB 00:03:57.267 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.267 EAL: request: mp_malloc_sync 00:03:57.267 EAL: No shared files mode enabled, IPC is disabled 00:03:57.267 EAL: Heap on socket 0 was shrunk by 130MB 00:03:57.526 EAL: Trying to obtain current memory policy. 00:03:57.526 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.526 EAL: Restoring previous memory policy: 4 00:03:57.526 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.526 EAL: request: mp_malloc_sync 00:03:57.526 EAL: No shared files mode enabled, IPC is disabled 00:03:57.526 EAL: Heap on socket 0 was expanded by 258MB 00:03:58.093 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.093 EAL: request: mp_malloc_sync 00:03:58.093 EAL: No shared files mode enabled, IPC is disabled 00:03:58.093 EAL: Heap on socket 0 was shrunk by 258MB 00:03:58.353 EAL: Trying to obtain current memory policy. 
00:03:58.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.611 EAL: Restoring previous memory policy: 4 00:03:58.611 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.611 EAL: request: mp_malloc_sync 00:03:58.611 EAL: No shared files mode enabled, IPC is disabled 00:03:58.611 EAL: Heap on socket 0 was expanded by 514MB 00:03:59.549 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.550 EAL: request: mp_malloc_sync 00:03:59.550 EAL: No shared files mode enabled, IPC is disabled 00:03:59.550 EAL: Heap on socket 0 was shrunk by 514MB 00:04:00.486 EAL: Trying to obtain current memory policy. 00:04:00.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.486 EAL: Restoring previous memory policy: 4 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was expanded by 1026MB 00:04:02.391 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.391 EAL: request: mp_malloc_sync 00:04:02.392 EAL: No shared files mode enabled, IPC is disabled 00:04:02.392 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:04.293 passed 00:04:04.293 00:04:04.293 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.293 suites 1 1 n/a 0 0 00:04:04.293 tests 2 2 2 0 0 00:04:04.293 asserts 497 497 497 0 n/a 00:04:04.293 00:04:04.293 Elapsed time = 7.847 seconds 00:04:04.293 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.293 EAL: request: mp_malloc_sync 00:04:04.293 EAL: No shared files mode enabled, IPC is disabled 00:04:04.293 EAL: Heap on socket 0 was shrunk by 2MB 00:04:04.293 EAL: No shared files mode enabled, IPC is disabled 00:04:04.293 EAL: No shared files mode enabled, IPC is disabled 00:04:04.293 EAL: No shared files mode enabled, IPC is disabled 00:04:04.293 00:04:04.293 real 0m8.101s 00:04:04.293 user 0m7.286s 00:04:04.293 sys 0m0.762s 00:04:04.293 00:48:10 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.293 00:48:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:04.293 ************************************ 00:04:04.293 END TEST env_vtophys 00:04:04.293 ************************************ 00:04:04.293 00:48:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/pci/pci_ut 00:04:04.293 00:48:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.293 00:48:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.293 00:48:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.293 ************************************ 00:04:04.293 START TEST env_pci 00:04:04.293 ************************************ 00:04:04.293 00:48:10 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/pci/pci_ut 00:04:04.293 00:04:04.293 00:04:04.293 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.293 http://cunit.sourceforge.net/ 00:04:04.293 00:04:04.293 00:04:04.293 Suite: pci 00:04:04.293 Test: pci_hook ...[2024-11-19 00:48:10.833831] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 143368 has claimed it 00:04:04.293 EAL: Cannot find device (10000:00:01.0) 00:04:04.293 EAL: Failed to attach device on primary process 00:04:04.293 passed 00:04:04.293 00:04:04.293 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:04.293 suites 1 1 n/a 0 0 00:04:04.293 tests 1 1 1 0 0 00:04:04.293 asserts 25 25 25 0 n/a 00:04:04.293 00:04:04.293 Elapsed time = 0.049 seconds 00:04:04.293 00:04:04.293 real 0m0.126s 00:04:04.293 user 0m0.057s 00:04:04.293 sys 0m0.069s 00:04:04.293 00:48:10 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.293 00:48:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:04.293 ************************************ 00:04:04.293 END TEST env_pci 00:04:04.293 ************************************ 00:04:04.293 00:48:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:04.293 00:48:10 env -- env/env.sh@15 -- # uname 00:04:04.293 00:48:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:04.293 00:48:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:04.293 00:48:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.293 00:48:10 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:04.293 00:48:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.293 00:48:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.553 ************************************ 00:04:04.553 START TEST env_dpdk_post_init 00:04:04.553 ************************************ 00:04:04.553 00:48:11 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.553 EAL: Detected CPU lcores: 96 00:04:04.553 EAL: Detected NUMA nodes: 2 00:04:04.553 EAL: Detected shared linkage of DPDK 00:04:04.553 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.553 EAL: Selected IOVA mode 'VA' 00:04:04.553 EAL: VFIO support initialized 00:04:04.553 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.553 EAL: Using IOMMU type 1 (Type 1) 00:04:04.553 EAL: Ignore mapping IO port bar(1) 00:04:04.553 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:04.813 EAL: Ignore mapping IO port bar(1) 00:04:04.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:04.813 EAL: Ignore mapping IO port bar(1) 00:04:04.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:04.813 EAL: Ignore mapping IO port bar(1) 00:04:04.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:04.813 EAL: Ignore mapping IO port bar(1) 00:04:04.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:04.813 EAL: Ignore mapping IO port bar(1) 00:04:04.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:04.813 EAL: Ignore mapping IO port bar(1) 00:04:04.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:04.813 EAL: Ignore mapping IO port bar(1) 00:04:04.813 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:05.381 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:05.639 EAL: Ignore mapping IO port bar(1) 00:04:05.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:05.639 EAL: Ignore mapping IO port bar(1) 00:04:05.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:05.639 EAL: Ignore mapping IO port bar(1) 00:04:05.639 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:05.639 EAL: Ignore mapping IO port bar(1) 00:04:05.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:05.639 EAL: Ignore mapping IO port bar(1) 00:04:05.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:05.639 EAL: Ignore mapping IO port bar(1) 00:04:05.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:05.639 EAL: Ignore mapping IO port bar(1) 00:04:05.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:05.639 EAL: Ignore mapping IO port bar(1) 00:04:05.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:08.927 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:08.927 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:08.927 Starting DPDK initialization... 00:04:08.927 Starting SPDK post initialization... 00:04:08.927 SPDK NVMe probe 00:04:08.927 Attaching to 0000:5e:00.0 00:04:08.927 Attached to 0000:5e:00.0 00:04:08.927 Cleaning up... 00:04:08.927 00:04:08.927 real 0m4.468s 00:04:08.927 user 0m3.035s 00:04:08.927 sys 0m0.501s 00:04:08.927 00:48:15 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.927 00:48:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:08.927 ************************************ 00:04:08.927 END TEST env_dpdk_post_init 00:04:08.927 ************************************ 00:04:08.927 00:48:15 env -- env/env.sh@26 -- # uname 00:04:08.927 00:48:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:08.927 00:48:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.927 00:48:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.927 00:48:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.927 00:48:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.927 ************************************ 00:04:08.927 START TEST env_mem_callbacks 00:04:08.927 ************************************ 00:04:08.927 00:48:15 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.927 EAL: Detected CPU lcores: 96 00:04:08.927 EAL: Detected NUMA nodes: 2 00:04:08.927 EAL: Detected shared linkage of DPDK 00:04:08.927 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.187 EAL: Selected IOVA mode 'VA' 00:04:09.187 EAL: VFIO support initialized 00:04:09.187 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:09.187 00:04:09.187 00:04:09.187 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.187 http://cunit.sourceforge.net/ 00:04:09.187 00:04:09.187 00:04:09.187 Suite: memory 00:04:09.187 Test: test ... 
00:04:09.187 register 0x200000200000 2097152 00:04:09.187 malloc 3145728 00:04:09.187 register 0x200000400000 4194304 00:04:09.187 buf 0x2000004fffc0 len 3145728 PASSED 00:04:09.187 malloc 64 00:04:09.187 buf 0x2000004ffec0 len 64 PASSED 00:04:09.187 malloc 4194304 00:04:09.187 register 0x200000800000 6291456 00:04:09.187 buf 0x2000009fffc0 len 4194304 PASSED 00:04:09.187 free 0x2000004fffc0 3145728 00:04:09.187 free 0x2000004ffec0 64 00:04:09.187 unregister 0x200000400000 4194304 PASSED 00:04:09.187 free 0x2000009fffc0 4194304 00:04:09.187 unregister 0x200000800000 6291456 PASSED 00:04:09.187 malloc 8388608 00:04:09.187 register 0x200000400000 10485760 00:04:09.187 buf 0x2000005fffc0 len 8388608 PASSED 00:04:09.187 free 0x2000005fffc0 8388608 00:04:09.187 unregister 0x200000400000 10485760 PASSED 00:04:09.187 passed 00:04:09.187 00:04:09.187 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.187 suites 1 1 n/a 0 0 00:04:09.187 tests 1 1 1 0 0 00:04:09.187 asserts 15 15 15 0 n/a 00:04:09.187 00:04:09.187 Elapsed time = 0.078 seconds 00:04:09.187 00:04:09.187 real 0m0.187s 00:04:09.187 user 0m0.115s 00:04:09.187 sys 0m0.068s 00:04:09.187 00:48:15 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.187 00:48:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:09.187 ************************************ 00:04:09.187 END TEST env_mem_callbacks 00:04:09.187 ************************************ 00:04:09.187 00:04:09.187 real 0m13.689s 00:04:09.187 user 0m11.000s 00:04:09.187 sys 0m1.734s 00:04:09.187 00:48:15 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.187 00:48:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.187 ************************************ 00:04:09.187 END TEST env 00:04:09.187 ************************************ 00:04:09.187 00:48:15 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/rpc.sh 00:04:09.187 00:48:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.187 00:48:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.187 00:48:15 -- common/autotest_common.sh@10 -- # set +x 00:04:09.187 ************************************ 00:04:09.187 START TEST rpc 00:04:09.187 ************************************ 00:04:09.187 00:48:15 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/rpc.sh 00:04:09.447 * Looking for test storage... 
00:04:09.447 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:09.447 00:48:15 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:09.447 00:48:15 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:09.447 00:48:15 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:09.447 00:48:16 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:09.447 00:48:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.447 00:48:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.447 00:48:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.447 00:48:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.447 00:48:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.447 00:48:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.447 00:48:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.447 00:48:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.447 00:48:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.447 00:48:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.447 00:48:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.447 00:48:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:09.447 00:48:16 rpc -- scripts/common.sh@345 -- # : 1 00:04:09.447 00:48:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.447 00:48:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.447 00:48:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:09.447 00:48:16 rpc -- scripts/common.sh@353 -- # local d=1 00:04:09.447 00:48:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.447 00:48:16 rpc -- scripts/common.sh@355 -- # echo 1 00:04:09.447 00:48:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.447 00:48:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:09.447 00:48:16 rpc -- scripts/common.sh@353 -- # local d=2 00:04:09.447 00:48:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.447 00:48:16 rpc -- scripts/common.sh@355 -- # echo 2 00:04:09.448 00:48:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.448 00:48:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.448 00:48:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.448 00:48:16 rpc -- scripts/common.sh@368 -- # return 0 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:09.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.448 --rc genhtml_branch_coverage=1 00:04:09.448 --rc genhtml_function_coverage=1 00:04:09.448 --rc genhtml_legend=1 00:04:09.448 --rc geninfo_all_blocks=1 00:04:09.448 --rc geninfo_unexecuted_blocks=1 00:04:09.448 00:04:09.448 ' 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:09.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.448 --rc genhtml_branch_coverage=1 00:04:09.448 --rc genhtml_function_coverage=1 00:04:09.448 --rc genhtml_legend=1 00:04:09.448 --rc geninfo_all_blocks=1 00:04:09.448 --rc geninfo_unexecuted_blocks=1 00:04:09.448 00:04:09.448 ' 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:09.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.448 --rc genhtml_branch_coverage=1 00:04:09.448 --rc genhtml_function_coverage=1 
00:04:09.448 --rc genhtml_legend=1 00:04:09.448 --rc geninfo_all_blocks=1 00:04:09.448 --rc geninfo_unexecuted_blocks=1 00:04:09.448 00:04:09.448 ' 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:09.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.448 --rc genhtml_branch_coverage=1 00:04:09.448 --rc genhtml_function_coverage=1 00:04:09.448 --rc genhtml_legend=1 00:04:09.448 --rc geninfo_all_blocks=1 00:04:09.448 --rc geninfo_unexecuted_blocks=1 00:04:09.448 00:04:09.448 ' 00:04:09.448 00:48:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=144411 00:04:09.448 00:48:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:09.448 00:48:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.448 00:48:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 144411 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@835 -- # '[' -z 144411 ']' 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.448 00:48:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.448 [2024-11-19 00:48:16.103729] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:09.448 [2024-11-19 00:48:16.103818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144411 ] 00:04:09.708 [2024-11-19 00:48:16.225103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.708 [2024-11-19 00:48:16.320535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:09.708 [2024-11-19 00:48:16.320578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 144411' to capture a snapshot of events at runtime. 00:04:09.708 [2024-11-19 00:48:16.320590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:09.708 [2024-11-19 00:48:16.320598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:09.708 [2024-11-19 00:48:16.320611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid144411 for offline analysis/debug. 
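[Note] The app_setup_trace notices above describe how the bdev tracepoints enabled by '-e bdev' (see the spdk_tgt launch at rpc.sh@64 earlier in this test) could be inspected while pid 144411 is alive. A minimal sketch of that capture, using the spdk_trace invocation suggested by the notice; the tool path is an assumption based on the workspace build layout used elsewhere in this job:

    # snapshot the bdev tracepoint group from the running target (pid taken from the notice above)
    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_trace -s spdk_tgt -p 144411
    # or preserve the shared-memory trace file for offline analysis, as the notice suggests
    cp /dev/shm/spdk_tgt_trace.pid144411 /tmp/spdk_tgt_trace.pid144411

In this run no snapshot is taken at this point; the test simply continues into the rpc_integrity sub-test below.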
00:04:09.708 [2024-11-19 00:48:16.321927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.644 00:48:17 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.644 00:48:17 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:10.644 00:48:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:10.644 00:48:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:10.644 00:48:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:10.644 00:48:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:10.644 00:48:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.644 00:48:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.644 00:48:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.644 ************************************ 00:04:10.644 START TEST rpc_integrity 00:04:10.644 ************************************ 00:04:10.644 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:10.644 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:10.644 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.644 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.644 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.644 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:10.644 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:10.644 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:10.644 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:10.644 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.644 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.644 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.644 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:10.644 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:10.644 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.645 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.645 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.645 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:10.645 { 00:04:10.645 "name": "Malloc0", 00:04:10.645 "aliases": [ 00:04:10.645 "df0fe409-5abd-46d1-820b-aacd34fcb02c" 00:04:10.645 ], 00:04:10.645 "product_name": "Malloc disk", 00:04:10.645 "block_size": 512, 00:04:10.645 "num_blocks": 16384, 00:04:10.645 "uuid": "df0fe409-5abd-46d1-820b-aacd34fcb02c", 00:04:10.645 "assigned_rate_limits": { 00:04:10.645 "rw_ios_per_sec": 0, 00:04:10.645 "rw_mbytes_per_sec": 0, 00:04:10.645 "r_mbytes_per_sec": 0, 00:04:10.645 "w_mbytes_per_sec": 0 00:04:10.645 }, 
00:04:10.645 "claimed": false, 00:04:10.645 "zoned": false, 00:04:10.645 "supported_io_types": { 00:04:10.645 "read": true, 00:04:10.645 "write": true, 00:04:10.645 "unmap": true, 00:04:10.645 "flush": true, 00:04:10.645 "reset": true, 00:04:10.645 "nvme_admin": false, 00:04:10.645 "nvme_io": false, 00:04:10.645 "nvme_io_md": false, 00:04:10.645 "write_zeroes": true, 00:04:10.645 "zcopy": true, 00:04:10.645 "get_zone_info": false, 00:04:10.645 "zone_management": false, 00:04:10.645 "zone_append": false, 00:04:10.645 "compare": false, 00:04:10.645 "compare_and_write": false, 00:04:10.645 "abort": true, 00:04:10.645 "seek_hole": false, 00:04:10.645 "seek_data": false, 00:04:10.645 "copy": true, 00:04:10.645 "nvme_iov_md": false 00:04:10.645 }, 00:04:10.645 "memory_domains": [ 00:04:10.645 { 00:04:10.645 "dma_device_id": "system", 00:04:10.645 "dma_device_type": 1 00:04:10.645 }, 00:04:10.645 { 00:04:10.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.645 "dma_device_type": 2 00:04:10.645 } 00:04:10.645 ], 00:04:10.645 "driver_specific": {} 00:04:10.645 } 00:04:10.645 ]' 00:04:10.645 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:10.645 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:10.645 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:10.645 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.645 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.645 [2024-11-19 00:48:17.276772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:10.645 [2024-11-19 00:48:17.276813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:10.645 [2024-11-19 00:48:17.276835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022280 00:04:10.645 [2024-11-19 00:48:17.276845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:10.645 [2024-11-19 00:48:17.278794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:10.645 [2024-11-19 00:48:17.278820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:10.645 Passthru0 00:04:10.645 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.645 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:10.645 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.645 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.645 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.645 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:10.645 { 00:04:10.645 "name": "Malloc0", 00:04:10.645 "aliases": [ 00:04:10.645 "df0fe409-5abd-46d1-820b-aacd34fcb02c" 00:04:10.645 ], 00:04:10.645 "product_name": "Malloc disk", 00:04:10.645 "block_size": 512, 00:04:10.645 "num_blocks": 16384, 00:04:10.645 "uuid": "df0fe409-5abd-46d1-820b-aacd34fcb02c", 00:04:10.645 "assigned_rate_limits": { 00:04:10.645 "rw_ios_per_sec": 0, 00:04:10.645 "rw_mbytes_per_sec": 0, 00:04:10.645 "r_mbytes_per_sec": 0, 00:04:10.645 "w_mbytes_per_sec": 0 00:04:10.645 }, 00:04:10.645 "claimed": true, 00:04:10.645 "claim_type": "exclusive_write", 00:04:10.645 "zoned": false, 00:04:10.645 "supported_io_types": { 00:04:10.645 "read": true, 00:04:10.645 "write": true, 00:04:10.645 "unmap": true, 00:04:10.645 
"flush": true, 00:04:10.645 "reset": true, 00:04:10.645 "nvme_admin": false, 00:04:10.645 "nvme_io": false, 00:04:10.645 "nvme_io_md": false, 00:04:10.645 "write_zeroes": true, 00:04:10.645 "zcopy": true, 00:04:10.645 "get_zone_info": false, 00:04:10.645 "zone_management": false, 00:04:10.645 "zone_append": false, 00:04:10.645 "compare": false, 00:04:10.645 "compare_and_write": false, 00:04:10.645 "abort": true, 00:04:10.645 "seek_hole": false, 00:04:10.645 "seek_data": false, 00:04:10.645 "copy": true, 00:04:10.645 "nvme_iov_md": false 00:04:10.645 }, 00:04:10.645 "memory_domains": [ 00:04:10.645 { 00:04:10.645 "dma_device_id": "system", 00:04:10.645 "dma_device_type": 1 00:04:10.645 }, 00:04:10.645 { 00:04:10.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.645 "dma_device_type": 2 00:04:10.645 } 00:04:10.645 ], 00:04:10.645 "driver_specific": {} 00:04:10.645 }, 00:04:10.645 { 00:04:10.645 "name": "Passthru0", 00:04:10.645 "aliases": [ 00:04:10.645 "8cd19821-f1b0-5de6-97f1-0c359f55820d" 00:04:10.645 ], 00:04:10.645 "product_name": "passthru", 00:04:10.645 "block_size": 512, 00:04:10.645 "num_blocks": 16384, 00:04:10.645 "uuid": "8cd19821-f1b0-5de6-97f1-0c359f55820d", 00:04:10.645 "assigned_rate_limits": { 00:04:10.645 "rw_ios_per_sec": 0, 00:04:10.645 "rw_mbytes_per_sec": 0, 00:04:10.645 "r_mbytes_per_sec": 0, 00:04:10.645 "w_mbytes_per_sec": 0 00:04:10.645 }, 00:04:10.645 "claimed": false, 00:04:10.645 "zoned": false, 00:04:10.645 "supported_io_types": { 00:04:10.645 "read": true, 00:04:10.645 "write": true, 00:04:10.645 "unmap": true, 00:04:10.645 "flush": true, 00:04:10.645 "reset": true, 00:04:10.645 "nvme_admin": false, 00:04:10.645 "nvme_io": false, 00:04:10.645 "nvme_io_md": false, 00:04:10.645 "write_zeroes": true, 00:04:10.645 "zcopy": true, 00:04:10.645 "get_zone_info": false, 00:04:10.645 "zone_management": false, 00:04:10.645 "zone_append": false, 00:04:10.645 "compare": false, 00:04:10.645 "compare_and_write": false, 00:04:10.645 "abort": true, 00:04:10.645 "seek_hole": false, 00:04:10.645 "seek_data": false, 00:04:10.645 "copy": true, 00:04:10.645 "nvme_iov_md": false 00:04:10.645 }, 00:04:10.645 "memory_domains": [ 00:04:10.645 { 00:04:10.645 "dma_device_id": "system", 00:04:10.645 "dma_device_type": 1 00:04:10.645 }, 00:04:10.645 { 00:04:10.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.645 "dma_device_type": 2 00:04:10.645 } 00:04:10.645 ], 00:04:10.645 "driver_specific": { 00:04:10.645 "passthru": { 00:04:10.645 "name": "Passthru0", 00:04:10.645 "base_bdev_name": "Malloc0" 00:04:10.645 } 00:04:10.645 } 00:04:10.645 } 00:04:10.645 ]' 00:04:10.645 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:10.904 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:10.904 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:10.904 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.904 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.904 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.904 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:10.904 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.904 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.904 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.905 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:04:10.905 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.905 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.905 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.905 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:10.905 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:10.905 00:48:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:10.905 00:04:10.905 real 0m0.317s 00:04:10.905 user 0m0.181s 00:04:10.905 sys 0m0.037s 00:04:10.905 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.905 00:48:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.905 ************************************ 00:04:10.905 END TEST rpc_integrity 00:04:10.905 ************************************ 00:04:10.905 00:48:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:10.905 00:48:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.905 00:48:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.905 00:48:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.905 ************************************ 00:04:10.905 START TEST rpc_plugins 00:04:10.905 ************************************ 00:04:10.905 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:10.905 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:10.905 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.905 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:10.905 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.905 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:10.905 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:10.905 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.905 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:10.905 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.905 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:10.905 { 00:04:10.905 "name": "Malloc1", 00:04:10.905 "aliases": [ 00:04:10.905 "6f1dbc6d-7fc8-4362-b835-cb9143a2eaef" 00:04:10.905 ], 00:04:10.905 "product_name": "Malloc disk", 00:04:10.905 "block_size": 4096, 00:04:10.905 "num_blocks": 256, 00:04:10.905 "uuid": "6f1dbc6d-7fc8-4362-b835-cb9143a2eaef", 00:04:10.905 "assigned_rate_limits": { 00:04:10.905 "rw_ios_per_sec": 0, 00:04:10.905 "rw_mbytes_per_sec": 0, 00:04:10.905 "r_mbytes_per_sec": 0, 00:04:10.905 "w_mbytes_per_sec": 0 00:04:10.905 }, 00:04:10.905 "claimed": false, 00:04:10.905 "zoned": false, 00:04:10.905 "supported_io_types": { 00:04:10.905 "read": true, 00:04:10.905 "write": true, 00:04:10.905 "unmap": true, 00:04:10.905 "flush": true, 00:04:10.905 "reset": true, 00:04:10.905 "nvme_admin": false, 00:04:10.905 "nvme_io": false, 00:04:10.905 "nvme_io_md": false, 00:04:10.905 "write_zeroes": true, 00:04:10.905 "zcopy": true, 00:04:10.905 "get_zone_info": false, 00:04:10.905 "zone_management": false, 00:04:10.905 "zone_append": false, 00:04:10.905 "compare": false, 00:04:10.905 "compare_and_write": false, 00:04:10.905 "abort": true, 00:04:10.905 "seek_hole": false, 00:04:10.905 "seek_data": false, 00:04:10.905 "copy": true, 00:04:10.905 "nvme_iov_md": 
false 00:04:10.905 }, 00:04:10.905 "memory_domains": [ 00:04:10.905 { 00:04:10.905 "dma_device_id": "system", 00:04:10.905 "dma_device_type": 1 00:04:10.905 }, 00:04:10.905 { 00:04:10.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.905 "dma_device_type": 2 00:04:10.905 } 00:04:10.905 ], 00:04:10.905 "driver_specific": {} 00:04:10.905 } 00:04:10.905 ]' 00:04:10.905 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:11.164 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:11.164 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:11.164 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.164 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.164 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.164 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:11.164 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.164 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.164 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.164 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:11.164 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:11.164 00:48:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:11.164 00:04:11.164 real 0m0.150s 00:04:11.164 user 0m0.085s 00:04:11.164 sys 0m0.021s 00:04:11.164 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.164 00:48:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.164 ************************************ 00:04:11.164 END TEST rpc_plugins 00:04:11.164 ************************************ 00:04:11.164 00:48:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:11.164 00:48:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.164 00:48:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.164 00:48:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.164 ************************************ 00:04:11.164 START TEST rpc_trace_cmd_test 00:04:11.164 ************************************ 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:11.164 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid144411", 00:04:11.164 "tpoint_group_mask": "0x8", 00:04:11.164 "iscsi_conn": { 00:04:11.164 "mask": "0x2", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "scsi": { 00:04:11.164 "mask": "0x4", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "bdev": { 00:04:11.164 "mask": "0x8", 00:04:11.164 "tpoint_mask": "0xffffffffffffffff" 00:04:11.164 }, 00:04:11.164 "nvmf_rdma": { 00:04:11.164 "mask": "0x10", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "nvmf_tcp": { 00:04:11.164 "mask": "0x20", 00:04:11.164 
"tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "ftl": { 00:04:11.164 "mask": "0x40", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "blobfs": { 00:04:11.164 "mask": "0x80", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "dsa": { 00:04:11.164 "mask": "0x200", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "thread": { 00:04:11.164 "mask": "0x400", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "nvme_pcie": { 00:04:11.164 "mask": "0x800", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "iaa": { 00:04:11.164 "mask": "0x1000", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "nvme_tcp": { 00:04:11.164 "mask": "0x2000", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "bdev_nvme": { 00:04:11.164 "mask": "0x4000", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "sock": { 00:04:11.164 "mask": "0x8000", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "blob": { 00:04:11.164 "mask": "0x10000", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "bdev_raid": { 00:04:11.164 "mask": "0x20000", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 }, 00:04:11.164 "scheduler": { 00:04:11.164 "mask": "0x40000", 00:04:11.164 "tpoint_mask": "0x0" 00:04:11.164 } 00:04:11.164 }' 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:11.164 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:11.423 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:11.423 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:11.423 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:11.423 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:11.423 00:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:11.423 00:04:11.423 real 0m0.233s 00:04:11.423 user 0m0.194s 00:04:11.423 sys 0m0.029s 00:04:11.423 00:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.423 00:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.423 ************************************ 00:04:11.423 END TEST rpc_trace_cmd_test 00:04:11.423 ************************************ 00:04:11.423 00:48:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:11.423 00:48:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:11.423 00:48:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:11.423 00:48:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.423 00:48:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.423 00:48:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.423 ************************************ 00:04:11.423 START TEST rpc_daemon_integrity 00:04:11.423 ************************************ 00:04:11.423 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:11.423 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.423 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.423 00:48:18 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.423 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.423 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.423 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.423 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.423 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.423 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.423 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.683 { 00:04:11.683 "name": "Malloc2", 00:04:11.683 "aliases": [ 00:04:11.683 "b00dca13-7818-4e45-91de-329ee849e0c5" 00:04:11.683 ], 00:04:11.683 "product_name": "Malloc disk", 00:04:11.683 "block_size": 512, 00:04:11.683 "num_blocks": 16384, 00:04:11.683 "uuid": "b00dca13-7818-4e45-91de-329ee849e0c5", 00:04:11.683 "assigned_rate_limits": { 00:04:11.683 "rw_ios_per_sec": 0, 00:04:11.683 "rw_mbytes_per_sec": 0, 00:04:11.683 "r_mbytes_per_sec": 0, 00:04:11.683 "w_mbytes_per_sec": 0 00:04:11.683 }, 00:04:11.683 "claimed": false, 00:04:11.683 "zoned": false, 00:04:11.683 "supported_io_types": { 00:04:11.683 "read": true, 00:04:11.683 "write": true, 00:04:11.683 "unmap": true, 00:04:11.683 "flush": true, 00:04:11.683 "reset": true, 00:04:11.683 "nvme_admin": false, 00:04:11.683 "nvme_io": false, 00:04:11.683 "nvme_io_md": false, 00:04:11.683 "write_zeroes": true, 00:04:11.683 "zcopy": true, 00:04:11.683 "get_zone_info": false, 00:04:11.683 "zone_management": false, 00:04:11.683 "zone_append": false, 00:04:11.683 "compare": false, 00:04:11.683 "compare_and_write": false, 00:04:11.683 "abort": true, 00:04:11.683 "seek_hole": false, 00:04:11.683 "seek_data": false, 00:04:11.683 "copy": true, 00:04:11.683 "nvme_iov_md": false 00:04:11.683 }, 00:04:11.683 "memory_domains": [ 00:04:11.683 { 00:04:11.683 "dma_device_id": "system", 00:04:11.683 "dma_device_type": 1 00:04:11.683 }, 00:04:11.683 { 00:04:11.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.683 "dma_device_type": 2 00:04:11.683 } 00:04:11.683 ], 00:04:11.683 "driver_specific": {} 00:04:11.683 } 00:04:11.683 ]' 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.683 [2024-11-19 00:48:18.184535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:11.683 
[2024-11-19 00:48:18.184572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.683 [2024-11-19 00:48:18.184593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023480 00:04:11.683 [2024-11-19 00:48:18.184602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.683 [2024-11-19 00:48:18.186513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.683 [2024-11-19 00:48:18.186537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.683 Passthru0 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.683 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.683 { 00:04:11.683 "name": "Malloc2", 00:04:11.683 "aliases": [ 00:04:11.683 "b00dca13-7818-4e45-91de-329ee849e0c5" 00:04:11.683 ], 00:04:11.683 "product_name": "Malloc disk", 00:04:11.683 "block_size": 512, 00:04:11.683 "num_blocks": 16384, 00:04:11.683 "uuid": "b00dca13-7818-4e45-91de-329ee849e0c5", 00:04:11.683 "assigned_rate_limits": { 00:04:11.683 "rw_ios_per_sec": 0, 00:04:11.683 "rw_mbytes_per_sec": 0, 00:04:11.683 "r_mbytes_per_sec": 0, 00:04:11.683 "w_mbytes_per_sec": 0 00:04:11.683 }, 00:04:11.683 "claimed": true, 00:04:11.683 "claim_type": "exclusive_write", 00:04:11.683 "zoned": false, 00:04:11.683 "supported_io_types": { 00:04:11.683 "read": true, 00:04:11.683 "write": true, 00:04:11.683 "unmap": true, 00:04:11.683 "flush": true, 00:04:11.683 "reset": true, 00:04:11.683 "nvme_admin": false, 00:04:11.683 "nvme_io": false, 00:04:11.683 "nvme_io_md": false, 00:04:11.683 "write_zeroes": true, 00:04:11.683 "zcopy": true, 00:04:11.683 "get_zone_info": false, 00:04:11.683 "zone_management": false, 00:04:11.683 "zone_append": false, 00:04:11.683 "compare": false, 00:04:11.684 "compare_and_write": false, 00:04:11.684 "abort": true, 00:04:11.684 "seek_hole": false, 00:04:11.684 "seek_data": false, 00:04:11.684 "copy": true, 00:04:11.684 "nvme_iov_md": false 00:04:11.684 }, 00:04:11.684 "memory_domains": [ 00:04:11.684 { 00:04:11.684 "dma_device_id": "system", 00:04:11.684 "dma_device_type": 1 00:04:11.684 }, 00:04:11.684 { 00:04:11.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.684 "dma_device_type": 2 00:04:11.684 } 00:04:11.684 ], 00:04:11.684 "driver_specific": {} 00:04:11.684 }, 00:04:11.684 { 00:04:11.684 "name": "Passthru0", 00:04:11.684 "aliases": [ 00:04:11.684 "23211514-88f9-507d-b554-e753e0f81ac6" 00:04:11.684 ], 00:04:11.684 "product_name": "passthru", 00:04:11.684 "block_size": 512, 00:04:11.684 "num_blocks": 16384, 00:04:11.684 "uuid": "23211514-88f9-507d-b554-e753e0f81ac6", 00:04:11.684 "assigned_rate_limits": { 00:04:11.684 "rw_ios_per_sec": 0, 00:04:11.684 "rw_mbytes_per_sec": 0, 00:04:11.684 "r_mbytes_per_sec": 0, 00:04:11.684 "w_mbytes_per_sec": 0 00:04:11.684 }, 00:04:11.684 "claimed": false, 00:04:11.684 "zoned": false, 00:04:11.684 "supported_io_types": { 00:04:11.684 "read": true, 00:04:11.684 "write": true, 00:04:11.684 "unmap": true, 00:04:11.684 "flush": true, 00:04:11.684 "reset": true, 
00:04:11.684 "nvme_admin": false, 00:04:11.684 "nvme_io": false, 00:04:11.684 "nvme_io_md": false, 00:04:11.684 "write_zeroes": true, 00:04:11.684 "zcopy": true, 00:04:11.684 "get_zone_info": false, 00:04:11.684 "zone_management": false, 00:04:11.684 "zone_append": false, 00:04:11.684 "compare": false, 00:04:11.684 "compare_and_write": false, 00:04:11.684 "abort": true, 00:04:11.684 "seek_hole": false, 00:04:11.684 "seek_data": false, 00:04:11.684 "copy": true, 00:04:11.684 "nvme_iov_md": false 00:04:11.684 }, 00:04:11.684 "memory_domains": [ 00:04:11.684 { 00:04:11.684 "dma_device_id": "system", 00:04:11.684 "dma_device_type": 1 00:04:11.684 }, 00:04:11.684 { 00:04:11.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.684 "dma_device_type": 2 00:04:11.684 } 00:04:11.684 ], 00:04:11.684 "driver_specific": { 00:04:11.684 "passthru": { 00:04:11.684 "name": "Passthru0", 00:04:11.684 "base_bdev_name": "Malloc2" 00:04:11.684 } 00:04:11.684 } 00:04:11.684 } 00:04:11.684 ]' 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.684 00:04:11.684 real 0m0.312s 00:04:11.684 user 0m0.181s 00:04:11.684 sys 0m0.038s 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.684 00:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.684 ************************************ 00:04:11.684 END TEST rpc_daemon_integrity 00:04:11.684 ************************************ 00:04:11.943 00:48:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:11.943 00:48:18 rpc -- rpc/rpc.sh@84 -- # killprocess 144411 00:04:11.943 00:48:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 144411 ']' 00:04:11.943 00:48:18 rpc -- common/autotest_common.sh@958 -- # kill -0 144411 00:04:11.943 00:48:18 rpc -- common/autotest_common.sh@959 -- # uname 00:04:11.943 00:48:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.943 00:48:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144411 
00:04:11.943 00:48:18 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.943 00:48:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.943 00:48:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144411' 00:04:11.943 killing process with pid 144411 00:04:11.943 00:48:18 rpc -- common/autotest_common.sh@973 -- # kill 144411 00:04:11.943 00:48:18 rpc -- common/autotest_common.sh@978 -- # wait 144411 00:04:14.479 00:04:14.479 real 0m4.893s 00:04:14.479 user 0m5.508s 00:04:14.479 sys 0m0.841s 00:04:14.479 00:48:20 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.479 00:48:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.479 ************************************ 00:04:14.479 END TEST rpc 00:04:14.479 ************************************ 00:04:14.479 00:48:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.479 00:48:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.479 00:48:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.479 00:48:20 -- common/autotest_common.sh@10 -- # set +x 00:04:14.479 ************************************ 00:04:14.479 START TEST skip_rpc 00:04:14.479 ************************************ 00:04:14.479 00:48:20 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.479 * Looking for test storage... 00:04:14.479 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:14.479 00:48:20 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.479 00:48:20 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.479 00:48:20 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.479 00:48:20 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.479 00:48:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:14.479 00:48:20 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.479 00:48:20 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.479 --rc genhtml_branch_coverage=1 00:04:14.479 --rc genhtml_function_coverage=1 00:04:14.479 --rc genhtml_legend=1 00:04:14.479 --rc geninfo_all_blocks=1 00:04:14.479 --rc geninfo_unexecuted_blocks=1 00:04:14.479 00:04:14.479 ' 00:04:14.480 00:48:20 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.480 --rc genhtml_branch_coverage=1 00:04:14.480 --rc genhtml_function_coverage=1 00:04:14.480 --rc genhtml_legend=1 00:04:14.480 --rc geninfo_all_blocks=1 00:04:14.480 --rc geninfo_unexecuted_blocks=1 00:04:14.480 00:04:14.480 ' 00:04:14.480 00:48:20 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.480 --rc genhtml_branch_coverage=1 00:04:14.480 --rc genhtml_function_coverage=1 00:04:14.480 --rc genhtml_legend=1 00:04:14.480 --rc geninfo_all_blocks=1 00:04:14.480 --rc geninfo_unexecuted_blocks=1 00:04:14.480 00:04:14.480 ' 00:04:14.480 00:48:20 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.480 --rc genhtml_branch_coverage=1 00:04:14.480 --rc genhtml_function_coverage=1 00:04:14.480 --rc genhtml_legend=1 00:04:14.480 --rc geninfo_all_blocks=1 00:04:14.480 --rc geninfo_unexecuted_blocks=1 00:04:14.480 00:04:14.480 ' 00:04:14.480 00:48:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:14.480 00:48:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:14.480 00:48:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:14.480 00:48:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.480 00:48:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.480 00:48:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.480 ************************************ 00:04:14.480 START TEST skip_rpc 00:04:14.480 ************************************ 00:04:14.480 00:48:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:14.480 
00:48:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=145281 00:04:14.480 00:48:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.480 00:48:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:14.480 00:48:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:14.480 [2024-11-19 00:48:21.103366] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:14.480 [2024-11-19 00:48:21.103448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145281 ] 00:04:14.739 [2024-11-19 00:48:21.228999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.739 [2024-11-19 00:48:21.334671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 145281 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 145281 ']' 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 145281 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145281 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145281' 00:04:20.012 killing process with pid 145281 00:04:20.012 00:48:26 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 145281 00:04:20.012 00:48:26 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 145281 00:04:21.917 00:04:21.918 real 0m7.349s 00:04:21.918 user 0m6.977s 00:04:21.918 sys 0m0.403s 00:04:21.918 00:48:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.918 00:48:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.918 ************************************ 00:04:21.918 END TEST skip_rpc 00:04:21.918 ************************************ 00:04:21.918 00:48:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:21.918 00:48:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.918 00:48:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.918 00:48:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.918 ************************************ 00:04:21.918 START TEST skip_rpc_with_json 00:04:21.918 ************************************ 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=146666 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 146666 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 146666 ']' 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.918 00:48:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.918 [2024-11-19 00:48:28.521372] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:21.918 [2024-11-19 00:48:28.521463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146666 ] 00:04:22.177 [2024-11-19 00:48:28.647845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.177 [2024-11-19 00:48:28.750871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.113 [2024-11-19 00:48:29.591581] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:23.113 request: 00:04:23.113 { 00:04:23.113 "trtype": "tcp", 00:04:23.113 "method": "nvmf_get_transports", 00:04:23.113 "req_id": 1 00:04:23.113 } 00:04:23.113 Got JSON-RPC error response 00:04:23.113 response: 00:04:23.113 { 00:04:23.113 "code": -19, 00:04:23.113 "message": "No such device" 00:04:23.113 } 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.113 [2024-11-19 00:48:29.599693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.113 00:48:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:23.113 { 00:04:23.113 "subsystems": [ 00:04:23.113 { 00:04:23.113 "subsystem": "fsdev", 00:04:23.113 "config": [ 00:04:23.113 { 00:04:23.113 "method": "fsdev_set_opts", 00:04:23.113 "params": { 00:04:23.113 "fsdev_io_pool_size": 65535, 00:04:23.113 "fsdev_io_cache_size": 256 00:04:23.113 } 00:04:23.113 } 00:04:23.113 ] 00:04:23.113 }, 00:04:23.113 { 00:04:23.113 "subsystem": "keyring", 00:04:23.113 "config": [] 00:04:23.113 }, 00:04:23.113 { 00:04:23.113 "subsystem": "iobuf", 00:04:23.113 "config": [ 00:04:23.113 { 00:04:23.113 "method": "iobuf_set_options", 00:04:23.113 "params": { 00:04:23.113 "small_pool_count": 8192, 00:04:23.113 "large_pool_count": 1024, 00:04:23.113 "small_bufsize": 8192, 00:04:23.113 "large_bufsize": 135168, 00:04:23.113 "enable_numa": false 00:04:23.113 } 00:04:23.113 } 00:04:23.113 ] 00:04:23.113 }, 00:04:23.113 { 00:04:23.113 "subsystem": "sock", 00:04:23.113 "config": [ 00:04:23.113 
{ 00:04:23.113 "method": "sock_set_default_impl", 00:04:23.113 "params": { 00:04:23.113 "impl_name": "posix" 00:04:23.113 } 00:04:23.113 }, 00:04:23.113 { 00:04:23.113 "method": "sock_impl_set_options", 00:04:23.113 "params": { 00:04:23.113 "impl_name": "ssl", 00:04:23.113 "recv_buf_size": 4096, 00:04:23.113 "send_buf_size": 4096, 00:04:23.113 "enable_recv_pipe": true, 00:04:23.113 "enable_quickack": false, 00:04:23.113 "enable_placement_id": 0, 00:04:23.113 "enable_zerocopy_send_server": true, 00:04:23.113 "enable_zerocopy_send_client": false, 00:04:23.113 "zerocopy_threshold": 0, 00:04:23.113 "tls_version": 0, 00:04:23.113 "enable_ktls": false 00:04:23.113 } 00:04:23.113 }, 00:04:23.113 { 00:04:23.113 "method": "sock_impl_set_options", 00:04:23.113 "params": { 00:04:23.113 "impl_name": "posix", 00:04:23.113 "recv_buf_size": 2097152, 00:04:23.113 "send_buf_size": 2097152, 00:04:23.113 "enable_recv_pipe": true, 00:04:23.113 "enable_quickack": false, 00:04:23.113 "enable_placement_id": 0, 00:04:23.113 "enable_zerocopy_send_server": true, 00:04:23.113 "enable_zerocopy_send_client": false, 00:04:23.113 "zerocopy_threshold": 0, 00:04:23.113 "tls_version": 0, 00:04:23.113 "enable_ktls": false 00:04:23.113 } 00:04:23.113 } 00:04:23.113 ] 00:04:23.113 }, 00:04:23.113 { 00:04:23.113 "subsystem": "vmd", 00:04:23.113 "config": [] 00:04:23.113 }, 00:04:23.113 { 00:04:23.113 "subsystem": "accel", 00:04:23.113 "config": [ 00:04:23.113 { 00:04:23.113 "method": "accel_set_options", 00:04:23.113 "params": { 00:04:23.113 "small_cache_size": 128, 00:04:23.113 "large_cache_size": 16, 00:04:23.113 "task_count": 2048, 00:04:23.113 "sequence_count": 2048, 00:04:23.113 "buf_count": 2048 00:04:23.113 } 00:04:23.113 } 00:04:23.113 ] 00:04:23.113 }, 00:04:23.113 { 00:04:23.114 "subsystem": "bdev", 00:04:23.114 "config": [ 00:04:23.114 { 00:04:23.114 "method": "bdev_set_options", 00:04:23.114 "params": { 00:04:23.114 "bdev_io_pool_size": 65535, 00:04:23.114 "bdev_io_cache_size": 256, 00:04:23.114 "bdev_auto_examine": true, 00:04:23.114 "iobuf_small_cache_size": 128, 00:04:23.114 "iobuf_large_cache_size": 16 00:04:23.114 } 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "method": "bdev_raid_set_options", 00:04:23.114 "params": { 00:04:23.114 "process_window_size_kb": 1024, 00:04:23.114 "process_max_bandwidth_mb_sec": 0 00:04:23.114 } 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "method": "bdev_iscsi_set_options", 00:04:23.114 "params": { 00:04:23.114 "timeout_sec": 30 00:04:23.114 } 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "method": "bdev_nvme_set_options", 00:04:23.114 "params": { 00:04:23.114 "action_on_timeout": "none", 00:04:23.114 "timeout_us": 0, 00:04:23.114 "timeout_admin_us": 0, 00:04:23.114 "keep_alive_timeout_ms": 10000, 00:04:23.114 "arbitration_burst": 0, 00:04:23.114 "low_priority_weight": 0, 00:04:23.114 "medium_priority_weight": 0, 00:04:23.114 "high_priority_weight": 0, 00:04:23.114 "nvme_adminq_poll_period_us": 10000, 00:04:23.114 "nvme_ioq_poll_period_us": 0, 00:04:23.114 "io_queue_requests": 0, 00:04:23.114 "delay_cmd_submit": true, 00:04:23.114 "transport_retry_count": 4, 00:04:23.114 "bdev_retry_count": 3, 00:04:23.114 "transport_ack_timeout": 0, 00:04:23.114 "ctrlr_loss_timeout_sec": 0, 00:04:23.114 "reconnect_delay_sec": 0, 00:04:23.114 "fast_io_fail_timeout_sec": 0, 00:04:23.114 "disable_auto_failback": false, 00:04:23.114 "generate_uuids": false, 00:04:23.114 "transport_tos": 0, 00:04:23.114 "nvme_error_stat": false, 00:04:23.114 "rdma_srq_size": 0, 00:04:23.114 "io_path_stat": false, 
00:04:23.114 "allow_accel_sequence": false, 00:04:23.114 "rdma_max_cq_size": 0, 00:04:23.114 "rdma_cm_event_timeout_ms": 0, 00:04:23.114 "dhchap_digests": [ 00:04:23.114 "sha256", 00:04:23.114 "sha384", 00:04:23.114 "sha512" 00:04:23.114 ], 00:04:23.114 "dhchap_dhgroups": [ 00:04:23.114 "null", 00:04:23.114 "ffdhe2048", 00:04:23.114 "ffdhe3072", 00:04:23.114 "ffdhe4096", 00:04:23.114 "ffdhe6144", 00:04:23.114 "ffdhe8192" 00:04:23.114 ] 00:04:23.114 } 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "method": "bdev_nvme_set_hotplug", 00:04:23.114 "params": { 00:04:23.114 "period_us": 100000, 00:04:23.114 "enable": false 00:04:23.114 } 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "method": "bdev_wait_for_examine" 00:04:23.114 } 00:04:23.114 ] 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "subsystem": "scsi", 00:04:23.114 "config": null 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "subsystem": "scheduler", 00:04:23.114 "config": [ 00:04:23.114 { 00:04:23.114 "method": "framework_set_scheduler", 00:04:23.114 "params": { 00:04:23.114 "name": "static" 00:04:23.114 } 00:04:23.114 } 00:04:23.114 ] 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "subsystem": "vhost_scsi", 00:04:23.114 "config": [] 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "subsystem": "vhost_blk", 00:04:23.114 "config": [] 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "subsystem": "ublk", 00:04:23.114 "config": [] 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "subsystem": "nbd", 00:04:23.114 "config": [] 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "subsystem": "nvmf", 00:04:23.114 "config": [ 00:04:23.114 { 00:04:23.114 "method": "nvmf_set_config", 00:04:23.114 "params": { 00:04:23.114 "discovery_filter": "match_any", 00:04:23.114 "admin_cmd_passthru": { 00:04:23.114 "identify_ctrlr": false 00:04:23.114 }, 00:04:23.114 "dhchap_digests": [ 00:04:23.114 "sha256", 00:04:23.114 "sha384", 00:04:23.114 "sha512" 00:04:23.114 ], 00:04:23.114 "dhchap_dhgroups": [ 00:04:23.114 "null", 00:04:23.114 "ffdhe2048", 00:04:23.114 "ffdhe3072", 00:04:23.114 "ffdhe4096", 00:04:23.114 "ffdhe6144", 00:04:23.114 "ffdhe8192" 00:04:23.114 ] 00:04:23.114 } 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "method": "nvmf_set_max_subsystems", 00:04:23.114 "params": { 00:04:23.114 "max_subsystems": 1024 00:04:23.114 } 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "method": "nvmf_set_crdt", 00:04:23.114 "params": { 00:04:23.114 "crdt1": 0, 00:04:23.114 "crdt2": 0, 00:04:23.114 "crdt3": 0 00:04:23.114 } 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "method": "nvmf_create_transport", 00:04:23.114 "params": { 00:04:23.114 "trtype": "TCP", 00:04:23.114 "max_queue_depth": 128, 00:04:23.114 "max_io_qpairs_per_ctrlr": 127, 00:04:23.114 "in_capsule_data_size": 4096, 00:04:23.114 "max_io_size": 131072, 00:04:23.114 "io_unit_size": 131072, 00:04:23.114 "max_aq_depth": 128, 00:04:23.114 "num_shared_buffers": 511, 00:04:23.114 "buf_cache_size": 4294967295, 00:04:23.114 "dif_insert_or_strip": false, 00:04:23.114 "zcopy": false, 00:04:23.114 "c2h_success": true, 00:04:23.114 "sock_priority": 0, 00:04:23.114 "abort_timeout_sec": 1, 00:04:23.114 "ack_timeout": 0, 00:04:23.114 "data_wr_pool_size": 0 00:04:23.114 } 00:04:23.114 } 00:04:23.114 ] 00:04:23.114 }, 00:04:23.114 { 00:04:23.114 "subsystem": "iscsi", 00:04:23.114 "config": [ 00:04:23.114 { 00:04:23.114 "method": "iscsi_set_options", 00:04:23.114 "params": { 00:04:23.114 "node_base": "iqn.2016-06.io.spdk", 00:04:23.114 "max_sessions": 128, 00:04:23.114 "max_connections_per_session": 2, 00:04:23.114 "max_queue_depth": 64, 00:04:23.114 
"default_time2wait": 2, 00:04:23.114 "default_time2retain": 20, 00:04:23.114 "first_burst_length": 8192, 00:04:23.114 "immediate_data": true, 00:04:23.114 "allow_duplicated_isid": false, 00:04:23.114 "error_recovery_level": 0, 00:04:23.114 "nop_timeout": 60, 00:04:23.114 "nop_in_interval": 30, 00:04:23.114 "disable_chap": false, 00:04:23.114 "require_chap": false, 00:04:23.114 "mutual_chap": false, 00:04:23.114 "chap_group": 0, 00:04:23.114 "max_large_datain_per_connection": 64, 00:04:23.114 "max_r2t_per_connection": 4, 00:04:23.114 "pdu_pool_size": 36864, 00:04:23.114 "immediate_data_pool_size": 16384, 00:04:23.114 "data_out_pool_size": 2048 00:04:23.114 } 00:04:23.114 } 00:04:23.114 ] 00:04:23.114 } 00:04:23.114 ] 00:04:23.114 } 00:04:23.114 00:48:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:23.114 00:48:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 146666 00:04:23.114 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 146666 ']' 00:04:23.114 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 146666 00:04:23.114 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:23.114 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.114 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146666 00:04:23.374 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.374 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.374 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146666' 00:04:23.374 killing process with pid 146666 00:04:23.374 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 146666 00:04:23.374 00:48:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 146666 00:04:25.910 00:48:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=147170 00:04:25.910 00:48:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:25.910 00:48:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:31.183 00:48:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 147170 00:04:31.183 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 147170 ']' 00:04:31.183 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 147170 00:04:31.183 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:31.183 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.183 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 147170 00:04:31.183 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.183 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.183 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 147170' 00:04:31.183 killing process with pid 147170 00:04:31.183 00:48:37 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 147170 00:04:31.183 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 147170 00:04:33.090 00:48:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:33.090 00:48:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:33.090 00:04:33.090 real 0m11.045s 00:04:33.091 user 0m10.653s 00:04:33.091 sys 0m0.851s 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.091 ************************************ 00:04:33.091 END TEST skip_rpc_with_json 00:04:33.091 ************************************ 00:04:33.091 00:48:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:33.091 00:48:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.091 00:48:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.091 00:48:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.091 ************************************ 00:04:33.091 START TEST skip_rpc_with_delay 00:04:33.091 ************************************ 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.091 [2024-11-19 00:48:39.633441] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is 
going to be started. 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:33.091 00:04:33.091 real 0m0.142s 00:04:33.091 user 0m0.078s 00:04:33.091 sys 0m0.063s 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.091 00:48:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:33.091 ************************************ 00:04:33.091 END TEST skip_rpc_with_delay 00:04:33.091 ************************************ 00:04:33.091 00:48:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:33.091 00:48:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:33.091 00:48:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:33.091 00:48:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.091 00:48:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.091 00:48:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.091 ************************************ 00:04:33.091 START TEST exit_on_failed_rpc_init 00:04:33.091 ************************************ 00:04:33.091 00:48:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:33.091 00:48:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=148554 00:04:33.091 00:48:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.091 00:48:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 148554 00:04:33.091 00:48:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 148554 ']' 00:04:33.091 00:48:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.091 00:48:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.091 00:48:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.091 00:48:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.091 00:48:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.350 [2024-11-19 00:48:39.846284] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
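The skip_rpc_with_json run above finishes by saving the live configuration over JSON-RPC and relaunching the target from that file with the RPC server disabled, then grepping the new log for 'TCP Transport Init' to prove the transport was really re-created. A minimal sketch of the same save/replay round-trip, with socket and file paths assumed rather than taken from this run:

  # Dump the running target's configuration to a JSON file.
  ./scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/spdk_config.json
  # Relaunch from the saved file; --no-rpc-server skips starting the RPC listener entirely.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/spdk_config.json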
00:04:33.350 [2024-11-19 00:48:39.846381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148554 ] 00:04:33.350 [2024-11-19 00:48:39.970489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.609 [2024-11-19 00:48:40.087772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:34.547 00:48:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.547 [2024-11-19 00:48:41.013146] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:34.547 [2024-11-19 00:48:41.013226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148783 ] 00:04:34.547 [2024-11-19 00:48:41.134415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.547 [2024-11-19 00:48:41.239107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.547 [2024-11-19 00:48:41.239194] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:34.547 [2024-11-19 00:48:41.239212] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:34.547 [2024-11-19 00:48:41.239222] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 148554 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 148554 ']' 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 148554 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.806 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 148554 00:04:35.066 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.066 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.066 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 148554' 00:04:35.066 killing process with pid 148554 00:04:35.066 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 148554 00:04:35.066 00:48:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 148554 00:04:37.611 00:04:37.611 real 0m4.064s 00:04:37.611 user 0m4.432s 00:04:37.611 sys 0m0.598s 00:04:37.611 00:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.611 00:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.611 ************************************ 00:04:37.611 END TEST exit_on_failed_rpc_init 00:04:37.611 ************************************ 00:04:37.611 00:48:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:37.611 00:04:37.611 real 0m23.056s 00:04:37.611 user 0m22.358s 00:04:37.611 sys 0m2.184s 00:04:37.611 00:48:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.611 00:48:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.611 ************************************ 00:04:37.611 END TEST skip_rpc 00:04:37.611 ************************************ 00:04:37.611 00:48:43 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:37.611 00:48:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.611 00:48:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.611 00:48:43 -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.611 ************************************ 00:04:37.611 START TEST rpc_client 00:04:37.611 ************************************ 00:04:37.611 00:48:43 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:37.611 * Looking for test storage... 00:04:37.611 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.611 00:48:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.611 --rc genhtml_branch_coverage=1 00:04:37.611 --rc genhtml_function_coverage=1 00:04:37.611 --rc genhtml_legend=1 00:04:37.611 --rc geninfo_all_blocks=1 00:04:37.611 --rc geninfo_unexecuted_blocks=1 00:04:37.611 00:04:37.611 ' 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.611 --rc genhtml_branch_coverage=1 00:04:37.611 --rc genhtml_function_coverage=1 00:04:37.611 --rc genhtml_legend=1 00:04:37.611 --rc geninfo_all_blocks=1 00:04:37.611 --rc geninfo_unexecuted_blocks=1 00:04:37.611 00:04:37.611 ' 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.611 --rc genhtml_branch_coverage=1 00:04:37.611 --rc genhtml_function_coverage=1 00:04:37.611 --rc genhtml_legend=1 00:04:37.611 --rc geninfo_all_blocks=1 00:04:37.611 --rc geninfo_unexecuted_blocks=1 00:04:37.611 00:04:37.611 ' 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.611 --rc genhtml_branch_coverage=1 00:04:37.611 --rc genhtml_function_coverage=1 00:04:37.611 --rc genhtml_legend=1 00:04:37.611 --rc geninfo_all_blocks=1 00:04:37.611 --rc geninfo_unexecuted_blocks=1 00:04:37.611 00:04:37.611 ' 00:04:37.611 00:48:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:37.611 OK 00:04:37.611 00:48:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:37.611 00:04:37.611 real 0m0.236s 00:04:37.611 user 0m0.134s 00:04:37.611 sys 0m0.117s 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.611 00:48:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:37.611 ************************************ 00:04:37.611 END TEST rpc_client 00:04:37.611 ************************************ 00:04:37.611 00:48:44 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config.sh 
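The scripts/common.sh trace above is the harness deciding whether the installed lcov predates 2.x: both version strings are split on dots and the fields are compared numerically, left to right. A stripped-down sketch of the same idea (not the harness's exact helper):

  # Succeed (return 0) when version $1 sorts before version $2.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "legacy lcov, keep the --rc options"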
00:04:37.611 00:48:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.611 00:48:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.611 00:48:44 -- common/autotest_common.sh@10 -- # set +x 00:04:37.611 ************************************ 00:04:37.611 START TEST json_config 00:04:37.611 ************************************ 00:04:37.611 00:48:44 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config.sh 00:04:37.872 00:48:44 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.872 00:48:44 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.872 00:48:44 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.872 00:48:44 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.872 00:48:44 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.872 00:48:44 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.872 00:48:44 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.872 00:48:44 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.872 00:48:44 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.872 00:48:44 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.872 00:48:44 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.872 00:48:44 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.872 00:48:44 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.872 00:48:44 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.872 00:48:44 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.872 00:48:44 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:37.872 00:48:44 json_config -- scripts/common.sh@345 -- # : 1 00:04:37.872 00:48:44 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.872 00:48:44 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.872 00:48:44 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:37.872 00:48:44 json_config -- scripts/common.sh@353 -- # local d=1 00:04:37.872 00:48:44 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.872 00:48:44 json_config -- scripts/common.sh@355 -- # echo 1 00:04:37.872 00:48:44 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.872 00:48:44 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:37.872 00:48:44 json_config -- scripts/common.sh@353 -- # local d=2 00:04:37.872 00:48:44 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.872 00:48:44 json_config -- scripts/common.sh@355 -- # echo 2 00:04:37.872 00:48:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.872 00:48:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.872 00:48:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.872 00:48:44 json_config -- scripts/common.sh@368 -- # return 0 00:04:37.872 00:48:44 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.872 00:48:44 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.872 --rc genhtml_branch_coverage=1 00:04:37.872 --rc genhtml_function_coverage=1 00:04:37.872 --rc genhtml_legend=1 00:04:37.872 --rc geninfo_all_blocks=1 00:04:37.872 --rc geninfo_unexecuted_blocks=1 00:04:37.872 00:04:37.872 ' 00:04:37.872 00:48:44 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.872 --rc genhtml_branch_coverage=1 00:04:37.872 --rc genhtml_function_coverage=1 00:04:37.872 --rc genhtml_legend=1 00:04:37.872 --rc geninfo_all_blocks=1 00:04:37.872 --rc geninfo_unexecuted_blocks=1 00:04:37.872 00:04:37.872 ' 00:04:37.872 00:48:44 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.872 --rc genhtml_branch_coverage=1 00:04:37.872 --rc genhtml_function_coverage=1 00:04:37.872 --rc genhtml_legend=1 00:04:37.872 --rc geninfo_all_blocks=1 00:04:37.872 --rc geninfo_unexecuted_blocks=1 00:04:37.872 00:04:37.872 ' 00:04:37.872 00:48:44 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.872 --rc genhtml_branch_coverage=1 00:04:37.872 --rc genhtml_function_coverage=1 00:04:37.872 --rc genhtml_legend=1 00:04:37.872 --rc geninfo_all_blocks=1 00:04:37.872 --rc geninfo_unexecuted_blocks=1 00:04:37.872 00:04:37.872 ' 00:04:37.872 00:48:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:04:37.872 00:48:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:37.872 00:48:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.872 00:48:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.872 00:48:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.872 00:48:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.872 00:48:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
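nvmf/common.sh seeds the rest of the suite with its defaults: listeners go on port 4420 (4421/4422 for the extra ports) and test addresses are carved out of 192.168.100.0/24 starting at host .8. Those values typically surface again in subsystem and listener RPCs along these lines (the NQN here is purely illustrative):

  # Hypothetical subsystem; address and service id mirror NVMF_IP_PREFIX/NVMF_IP_LEAST_ADDR and NVMF_PORT above.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420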
00:04:37.873 00:48:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:04:37.873 00:48:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:37.873 00:48:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.873 00:48:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.873 00:48:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.873 00:48:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.873 00:48:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.873 00:48:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.873 00:48:44 json_config -- paths/export.sh@5 -- # export PATH 00:04:37.873 00:48:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@51 -- # : 0 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
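Here nvmf/common.sh derives the host identity from nvme gen-hostnqn and keeps the matching CLI flags in the NVME_HOST array; later connect commands are assembled from those pieces roughly as sketched below (target NQN and address are assumptions, not taken from this run):

  HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*:}           # the trailing UUID doubles as the host ID
  nvme connect -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"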
00:04:37.873 00:48:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:37.873 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:37.873 00:48:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/common.sh 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_initiator_config.json') 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:37.873 INFO: JSON configuration test init 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:37.873 00:48:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.873 00:48:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:37.873 00:48:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.873 00:48:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.873 00:48:44 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:37.873 00:48:44 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:37.873 00:48:44 json_config -- json_config/common.sh@10 -- # shift 00:04:37.873 00:48:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.873 00:48:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.873 00:48:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.873 00:48:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.873 00:48:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.873 00:48:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=149399 00:04:37.873 00:48:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.873 Waiting for target to run... 00:04:37.873 00:48:44 json_config -- json_config/common.sh@25 -- # waitforlisten 149399 /var/tmp/spdk_tgt.sock 00:04:37.873 00:48:44 json_config -- common/autotest_common.sh@835 -- # '[' -z 149399 ']' 00:04:37.873 00:48:44 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.873 00:48:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:37.873 00:48:44 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.873 00:48:44 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.873 00:48:44 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.873 00:48:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.873 [2024-11-19 00:48:44.521536] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
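The json_config test starts its target with --wait-for-rpc on a dedicated socket, which holds subsystem initialization until the configuration has been pushed over RPC. Reduced to the bare pattern (the harness's own wrappers and config omitted), that looks roughly like:

  # Start the target but defer initialization.
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # ... issue any pre-init RPCs here (accel/sock options, load_config, etc.) ...
  # Then let the framework finish bringing the subsystems up.
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init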
00:04:37.873 [2024-11-19 00:48:44.521649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149399 ] 00:04:38.442 [2024-11-19 00:48:44.858682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.442 [2024-11-19 00:48:44.954948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.701 00:48:45 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.701 00:48:45 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:38.701 00:48:45 json_config -- json_config/common.sh@26 -- # echo '' 00:04:38.701 00:04:38.701 00:48:45 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:38.701 00:48:45 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:38.701 00:48:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.701 00:48:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.701 00:48:45 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:38.701 00:48:45 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:38.701 00:48:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.701 00:48:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.701 00:48:45 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:38.701 00:48:45 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:38.701 00:48:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:42.894 00:48:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.894 00:48:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:42.894 00:48:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:42.894 00:48:49 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@54 -- # sort 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:42.894 00:48:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.894 00:48:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.894 00:48:49 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:42.895 00:48:49 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:42.895 00:48:49 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:42.895 00:48:49 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:42.895 00:48:49 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:42.895 00:48:49 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:42.895 00:48:49 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:42.895 00:48:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.895 00:48:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.895 00:48:49 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:42.895 00:48:49 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:42.895 00:48:49 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:42.895 00:48:49 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:42.895 00:48:49 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:04:42.895 00:48:49 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:42.895 00:48:49 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:42.895 00:48:49 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:42.895 00:48:49 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:42.895 00:48:49 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:42.895 00:48:49 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:42.895 00:48:49 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:42.895 00:48:49 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:04:42.895 00:48:49 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:42.895 00:48:49 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:04:42.895 00:48:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@317 
-- # pci_drivers=() 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@320 -- # e810=() 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@321 -- # x722=() 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@322 -- # mlx=() 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:48.167 00:48:54 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:04:48.168 Found 0000:af:00.0 (0x8086 - 0x159b) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 
00:04:48.168 Found 0000:af:00.1 (0x8086 - 0x159b) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@403 -- # (( 0 != 1 )) 00:04:48.168 00:48:54 json_config -- nvmf/common.sh@403 -- # modprobe -r irdma 00:04:48.427 00:48:54 json_config -- nvmf/common.sh@405 -- # modinfo irdma 00:04:48.427 00:48:54 json_config -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:04:48.687 Found net devices under 0000:af:00.0: cvl_0_0 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:04:48.687 Found net devices under 0000:af:00.1: cvl_0_1 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@62 -- # uname 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:48.687 00:48:55 json_config -- 
nvmf/common.sh@68 -- # modprobe ib_umad 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@77 -- # get_rdma_if_list 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@108 -- # echo cvl_0_0 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@108 -- # echo cvl_0_1 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@78 -- # ip= 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@80 -- # ip addr add 192.168.100.8/24 dev cvl_0_0 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@81 -- # ip link set cvl_0_0 up 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:04:48.687 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:04:48.687 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:04:48.687 altname enp175s0f0np0 00:04:48.687 altname ens801f0np0 00:04:48.687 inet 192.168.100.8/24 scope global cvl_0_0 00:04:48.687 valid_lft forever preferred_lft forever 00:04:48.687 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:04:48.687 valid_lft forever preferred_lft 
forever 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@78 -- # ip= 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@80 -- # ip addr add 192.168.100.9/24 dev cvl_0_1 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@81 -- # ip link set cvl_0_1 up 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:04:48.687 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:04:48.687 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:04:48.687 altname enp175s0f1np1 00:04:48.687 altname ens801f1np1 00:04:48.687 inet 192.168.100.9/24 scope global cvl_0_1 00:04:48.687 valid_lft forever preferred_lft forever 00:04:48.687 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:04:48.687 valid_lft forever preferred_lft forever 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@450 -- # return 0 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:04:48.687 00:48:55 json_config -- nvmf/common.sh@108 -- # echo cvl_0_0 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@108 -- # echo cvl_0_1 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 
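The RDMA bring-up traced above reduces to a handful of commands per port: reload irdma with RoCE enabled, load the IB core modules, then give each cvl netdev an address from 192.168.100.0/24. A condensed sketch of the equivalent manual steps, using the interface names and addresses from this run (illustration only, not the literal nvmf/common.sh helpers):
  modprobe -r irdma && modprobe irdma roce_ena=1     # reload irdma with RoCE enabled
  modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
  ip addr add 192.168.100.8/24 dev cvl_0_0 && ip link set cvl_0_0 up
  ip addr add 192.168.100.9/24 dev cvl_0_1 && ip link set cvl_0_1 up
  # read an address back the same way the script does
  ip -o -4 addr show cvl_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8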
00:04:48.688 00:48:55 json_config -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:04:48.688 192.168.100.9' 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:04:48.688 192.168.100.9' 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@485 -- # head -n 1 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:04:48.688 192.168.100.9' 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@486 -- # head -n 1 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:04:48.688 00:48:55 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:04:48.947 00:48:55 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:04:48.947 00:48:55 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:48.947 00:48:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:48.947 MallocForNvmf0 00:04:48.947 00:48:55 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:48.947 00:48:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:49.206 MallocForNvmf1 00:04:49.206 00:48:55 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:49.206 00:48:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:49.465 [2024-11-19 00:48:55.970731] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:49.465 [2024-11-19 00:48:56.025450] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000295c0/0x617000008340) succeed. 
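Collecting the tgt_rpc calls from this part of the run, the NVMe-oF/RDMA target is assembled with the following JSON-RPC sequence (commands exactly as issued above and in the lines that follow, shown back-to-back here for readability):
  RPC='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t rdma -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420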
00:04:49.466 [2024-11-19 00:48:56.035790] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029740/0x6170000086c0) succeed. 00:04:49.466 [2024-11-19 00:48:56.035815] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:04:49.466 [2024-11-19 00:48:56.038609] iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/3071 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:04:49.466 [2024-11-19 00:48:56.038632] iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:04:49.466 [2024-11-19 00:48:56.040879] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:04:49.466 00:48:56 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:49.466 00:48:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:49.725 00:48:56 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:49.725 00:48:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:49.984 00:48:56 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:49.984 00:48:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:49.984 00:48:56 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:49.984 00:48:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:50.243 [2024-11-19 00:48:56.807351] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:50.243 00:48:56 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:50.243 00:48:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.243 00:48:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.243 00:48:56 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:50.243 00:48:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.243 00:48:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.243 00:48:56 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:50.243 00:48:56 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:50.243 00:48:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:04:50.501 MallocBdevForConfigChangeCheck 00:04:50.501 00:48:57 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:50.501 00:48:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.501 00:48:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.501 00:48:57 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:50.501 00:48:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.069 00:48:57 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:51.069 INFO: shutting down applications... 00:04:51.069 00:48:57 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:51.069 00:48:57 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:51.069 00:48:57 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:51.069 00:48:57 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:52.446 Calling clear_iscsi_subsystem 00:04:52.446 Calling clear_nvmf_subsystem 00:04:52.446 Calling clear_nbd_subsystem 00:04:52.446 Calling clear_ublk_subsystem 00:04:52.446 Calling clear_vhost_blk_subsystem 00:04:52.446 Calling clear_vhost_scsi_subsystem 00:04:52.446 Calling clear_bdev_subsystem 00:04:52.446 00:48:59 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py 00:04:52.446 00:48:59 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:52.446 00:48:59 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:52.446 00:48:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.446 00:48:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:52.446 00:48:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:53.013 00:48:59 json_config -- json_config/json_config.sh@352 -- # break 00:04:53.013 00:48:59 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:53.013 00:48:59 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:53.013 00:48:59 json_config -- json_config/common.sh@31 -- # local app=target 00:04:53.013 00:48:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.013 00:48:59 json_config -- json_config/common.sh@35 -- # [[ -n 149399 ]] 00:04:53.013 00:48:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 149399 00:04:53.013 00:48:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.013 00:48:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.013 00:48:59 json_config -- json_config/common.sh@41 -- # kill -0 149399 00:04:53.013 00:48:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.271 00:48:59 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.272 00:48:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.272 00:48:59 
json_config -- json_config/common.sh@41 -- # kill -0 149399 00:04:53.272 00:48:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.840 00:49:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.840 00:49:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.840 00:49:00 json_config -- json_config/common.sh@41 -- # kill -0 149399 00:04:53.840 00:49:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.840 00:49:00 json_config -- json_config/common.sh@43 -- # break 00:04:53.840 00:49:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.840 00:49:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.840 SPDK target shutdown done 00:04:53.840 00:49:00 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:53.840 INFO: relaunching applications... 00:04:53.840 00:49:00 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.840 00:49:00 json_config -- json_config/common.sh@9 -- # local app=target 00:04:53.840 00:49:00 json_config -- json_config/common.sh@10 -- # shift 00:04:53.840 00:49:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.840 00:49:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.840 00:49:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.840 00:49:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.840 00:49:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.840 00:49:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=154415 00:04:53.840 00:49:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.840 00:49:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.840 Waiting for target to run... 00:04:53.840 00:49:00 json_config -- json_config/common.sh@25 -- # waitforlisten 154415 /var/tmp/spdk_tgt.sock 00:04:53.840 00:49:00 json_config -- common/autotest_common.sh@835 -- # '[' -z 154415 ']' 00:04:53.840 00:49:00 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.840 00:49:00 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.840 00:49:00 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.840 00:49:00 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.840 00:49:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.840 [2024-11-19 00:49:00.517713] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
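The target shutdown that just completed (and is repeated later for the extra_key app) follows the pattern in json_config/common.sh: send SIGINT, then poll the pid at half-second intervals for up to 30 tries. Roughly:
  app_pid=149399                      # pid of the spdk_tgt being shut down in this run
  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only checks that the process still exists
      sleep 0.5
  done
  echo 'SPDK target shutdown done'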
00:04:53.840 [2024-11-19 00:49:00.517805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154415 ] 00:04:54.408 [2024-11-19 00:49:01.028750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.667 [2024-11-19 00:49:01.134017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.857 [2024-11-19 00:49:04.788002] transport.c: 288:nvmf_transport_create: *WARNING*: The num_shared_buffers value (4095) is larger than the available iobuf pool size (1024). Please increase the iobuf pool sizes. 00:04:58.857 [2024-11-19 00:49:04.805924] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000029d40/0x617000008a40) succeed. 00:04:58.857 [2024-11-19 00:49:04.816541] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029ec0/0x617000008dc0) succeed. 00:04:58.857 [2024-11-19 00:49:04.819363] iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/3071 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:04:58.857 [2024-11-19 00:49:04.819395] iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:04:58.857 [2024-11-19 00:49:04.821734] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:04:58.857 [2024-11-19 00:49:04.850043] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:58.857 00:49:04 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.857 00:49:04 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:58.857 00:49:04 json_config -- json_config/common.sh@26 -- # echo '' 00:04:58.857 00:04:58.857 00:49:04 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:58.857 00:49:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:58.857 INFO: Checking if target configuration is the same... 00:04:58.857 00:49:04 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.857 00:49:04 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:58.857 00:49:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.857 + '[' 2 -ne 2 ']' 00:04:58.857 +++ dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:58.857 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/../.. 
00:04:58.857 + rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:04:58.857 +++ basename /dev/fd/62 00:04:58.857 ++ mktemp /tmp/62.XXX 00:04:58.857 + tmp_file_1=/tmp/62.GqN 00:04:58.857 +++ basename /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.857 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:58.857 + tmp_file_2=/tmp/spdk_tgt_config.json.vKq 00:04:58.857 + ret=0 00:04:58.857 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.857 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.857 + diff -u /tmp/62.GqN /tmp/spdk_tgt_config.json.vKq 00:04:58.857 + echo 'INFO: JSON config files are the same' 00:04:58.857 INFO: JSON config files are the same 00:04:58.857 + rm /tmp/62.GqN /tmp/spdk_tgt_config.json.vKq 00:04:58.857 + exit 0 00:04:58.857 00:49:05 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:58.857 00:49:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:58.857 INFO: changing configuration and checking if this can be detected... 00:04:58.857 00:49:05 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:58.857 00:49:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:58.857 00:49:05 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:58.857 00:49:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.857 00:49:05 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.857 + '[' 2 -ne 2 ']' 00:04:58.857 +++ dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:58.857 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/../.. 
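The comparison json_diff.sh performs is: normalize both inputs (the live save_config dump on /dev/fd/62 and spdk_tgt_config.json on disk) with config_filter.py -method sort into the two mktemp files, then diff them; an empty diff exits 0, any difference sets ret=1. A rough equivalent of the first comparison above (the exact redirections are not visible in the trace):
  config_filter.py -method sort < /dev/fd/62           > /tmp/62.GqN                    # sorted save_config dump
  config_filter.py -method sort < spdk_tgt_config.json > /tmp/spdk_tgt_config.json.vKq  # sorted on-disk config
  if diff -u /tmp/62.GqN /tmp/spdk_tgt_config.json.vKq; then
      echo 'INFO: JSON config files are the same'   # exit 0 path, as seen above
  else
      ret=1                                         # configuration change detected
  fi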
00:04:58.857 + rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:04:58.857 +++ basename /dev/fd/62 00:04:58.857 ++ mktemp /tmp/62.XXX 00:04:58.857 + tmp_file_1=/tmp/62.Luh 00:04:58.857 +++ basename /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.857 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:58.857 + tmp_file_2=/tmp/spdk_tgt_config.json.tbz 00:04:58.858 + ret=0 00:04:58.858 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:59.424 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:59.424 + diff -u /tmp/62.Luh /tmp/spdk_tgt_config.json.tbz 00:04:59.424 + ret=1 00:04:59.424 + echo '=== Start of file: /tmp/62.Luh ===' 00:04:59.424 + cat /tmp/62.Luh 00:04:59.424 + echo '=== End of file: /tmp/62.Luh ===' 00:04:59.424 + echo '' 00:04:59.424 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tbz ===' 00:04:59.424 + cat /tmp/spdk_tgt_config.json.tbz 00:04:59.424 + echo '=== End of file: /tmp/spdk_tgt_config.json.tbz ===' 00:04:59.424 + echo '' 00:04:59.424 + rm /tmp/62.Luh /tmp/spdk_tgt_config.json.tbz 00:04:59.424 + exit 1 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:59.424 INFO: configuration change detected. 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:59.424 00:49:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.424 00:49:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@324 -- # [[ -n 154415 ]] 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:59.424 00:49:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.424 00:49:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:59.424 00:49:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.424 00:49:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.424 00:49:05 json_config -- json_config/json_config.sh@330 -- # killprocess 154415 00:04:59.424 00:49:05 json_config -- common/autotest_common.sh@954 -- # '[' -z 154415 ']' 00:04:59.424 00:49:05 json_config -- common/autotest_common.sh@958 -- # kill -0 154415 00:04:59.424 00:49:05 json_config -- common/autotest_common.sh@959 -- # uname 00:04:59.424 00:49:05 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.424 00:49:05 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154415 00:04:59.424 00:49:06 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.424 00:49:06 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.424 00:49:06 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154415' 00:04:59.424 killing process with pid 154415 00:04:59.424 00:49:06 json_config -- common/autotest_common.sh@973 -- # kill 154415 00:04:59.424 00:49:06 json_config -- common/autotest_common.sh@978 -- # wait 154415 00:05:01.962 00:49:08 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.962 00:49:08 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:01.962 00:49:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.962 00:49:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.962 00:49:08 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:01.962 00:49:08 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:01.962 INFO: Success 00:05:01.962 00:49:08 json_config -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:01.962 00:49:08 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:01.962 00:49:08 json_config -- nvmf/common.sh@121 -- # sync 00:05:01.962 00:49:08 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:05:01.962 00:49:08 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:05:01.962 00:49:08 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:05:01.962 00:49:08 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:01.962 00:49:08 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:05:01.962 00:05:01.962 real 0m24.074s 00:05:01.962 user 0m26.270s 00:05:01.962 sys 0m7.354s 00:05:01.962 00:49:08 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.962 00:49:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.962 ************************************ 00:05:01.962 END TEST json_config 00:05:01.962 ************************************ 00:05:01.962 00:49:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:01.962 00:49:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.962 00:49:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.962 00:49:08 -- common/autotest_common.sh@10 -- # set +x 00:05:01.962 ************************************ 00:05:01.962 START TEST json_config_extra_key 00:05:01.962 ************************************ 00:05:01.962 00:49:08 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:01.962 00:49:08 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.962 00:49:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.962 00:49:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.962 00:49:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.962 00:49:08 json_config_extra_key 
-- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:01.962 00:49:08 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.962 00:49:08 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.962 --rc genhtml_branch_coverage=1 00:05:01.962 --rc genhtml_function_coverage=1 00:05:01.962 --rc genhtml_legend=1 00:05:01.962 --rc geninfo_all_blocks=1 00:05:01.962 --rc geninfo_unexecuted_blocks=1 00:05:01.962 00:05:01.962 ' 00:05:01.962 00:49:08 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.962 --rc genhtml_branch_coverage=1 00:05:01.962 --rc genhtml_function_coverage=1 00:05:01.962 --rc genhtml_legend=1 00:05:01.962 --rc geninfo_all_blocks=1 00:05:01.962 --rc geninfo_unexecuted_blocks=1 00:05:01.962 00:05:01.962 ' 00:05:01.962 00:49:08 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.962 --rc genhtml_branch_coverage=1 00:05:01.962 --rc 
genhtml_function_coverage=1 00:05:01.962 --rc genhtml_legend=1 00:05:01.962 --rc geninfo_all_blocks=1 00:05:01.962 --rc geninfo_unexecuted_blocks=1 00:05:01.962 00:05:01.962 ' 00:05:01.962 00:49:08 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.962 --rc genhtml_branch_coverage=1 00:05:01.962 --rc genhtml_function_coverage=1 00:05:01.962 --rc genhtml_legend=1 00:05:01.962 --rc geninfo_all_blocks=1 00:05:01.962 --rc geninfo_unexecuted_blocks=1 00:05:01.962 00:05:01.962 ' 00:05:01.962 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.962 00:49:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.962 00:49:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.962 00:49:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.962 00:49:08 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.962 00:49:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.963 00:49:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:01.963 00:49:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.963 00:49:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:01.963 00:49:08 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.963 00:49:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.963 00:49:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.963 00:49:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.963 00:49:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.963 00:49:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.963 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.963 00:49:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.963 00:49:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.963 00:49:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/common.sh 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:01.963 INFO: launching applications... 00:05:01.963 00:49:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=155902 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:01.963 Waiting for target to run... 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 155902 /var/tmp/spdk_tgt.sock 00:05:01.963 00:49:08 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 155902 ']' 00:05:01.963 00:49:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json 00:05:01.963 00:49:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.963 00:49:08 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.963 00:49:08 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.963 00:49:08 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.963 00:49:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:02.222 [2024-11-19 00:49:08.656562] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
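For json_config_extra_key the target is started straight from a JSON file instead of being configured over RPC afterwards; the launch and the wait for its RPC socket, condensed from the surrounding trace, are roughly:
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json &
  app_pid=$!                                        # 155902 in this run
  waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock   # autotest helper: poll until the socket accepts RPCs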
00:05:02.222 [2024-11-19 00:49:08.656691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155902 ] 00:05:02.482 [2024-11-19 00:49:09.003277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.482 [2024-11-19 00:49:09.101237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.418 00:49:09 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.418 00:49:09 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:03.418 00:49:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:03.418 00:05:03.418 00:49:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:03.418 INFO: shutting down applications... 00:05:03.418 00:49:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:03.418 00:49:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:03.418 00:49:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:03.418 00:49:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 155902 ]] 00:05:03.418 00:49:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 155902 00:05:03.418 00:49:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:03.418 00:49:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.418 00:49:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155902 00:05:03.418 00:49:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.677 00:49:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.677 00:49:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.677 00:49:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155902 00:05:03.677 00:49:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.244 00:49:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.245 00:49:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.245 00:49:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155902 00:05:04.245 00:49:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.812 00:49:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.812 00:49:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.812 00:49:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155902 00:05:04.812 00:49:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.380 00:49:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.380 00:49:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.380 00:49:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155902 00:05:05.380 00:49:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.639 00:49:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.639 00:49:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.639 00:49:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155902 00:05:05.639 
00:49:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.209 00:49:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.209 00:49:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.209 00:49:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 155902 00:05:06.209 00:49:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:06.209 00:49:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:06.209 00:49:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:06.209 00:49:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:06.209 SPDK target shutdown done 00:05:06.209 00:49:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:06.209 Success 00:05:06.209 00:05:06.209 real 0m4.443s 00:05:06.209 user 0m3.901s 00:05:06.209 sys 0m0.545s 00:05:06.209 00:49:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.209 00:49:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:06.209 ************************************ 00:05:06.209 END TEST json_config_extra_key 00:05:06.209 ************************************ 00:05:06.209 00:49:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.209 00:49:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.209 00:49:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.209 00:49:12 -- common/autotest_common.sh@10 -- # set +x 00:05:06.209 ************************************ 00:05:06.209 START TEST alias_rpc 00:05:06.209 ************************************ 00:05:06.209 00:49:12 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.470 * Looking for test storage... 
00:05:06.470 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc 00:05:06.470 00:49:12 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.470 00:49:12 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.470 00:49:12 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.470 00:49:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.470 --rc genhtml_branch_coverage=1 00:05:06.470 --rc genhtml_function_coverage=1 00:05:06.470 --rc genhtml_legend=1 00:05:06.470 --rc geninfo_all_blocks=1 00:05:06.470 --rc geninfo_unexecuted_blocks=1 00:05:06.470 00:05:06.470 ' 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.470 --rc genhtml_branch_coverage=1 00:05:06.470 --rc genhtml_function_coverage=1 00:05:06.470 --rc genhtml_legend=1 00:05:06.470 --rc geninfo_all_blocks=1 00:05:06.470 --rc geninfo_unexecuted_blocks=1 00:05:06.470 00:05:06.470 ' 00:05:06.470 00:49:13 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.470 --rc genhtml_branch_coverage=1 00:05:06.470 --rc genhtml_function_coverage=1 00:05:06.470 --rc genhtml_legend=1 00:05:06.470 --rc geninfo_all_blocks=1 00:05:06.470 --rc geninfo_unexecuted_blocks=1 00:05:06.470 00:05:06.470 ' 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.470 --rc genhtml_branch_coverage=1 00:05:06.470 --rc genhtml_function_coverage=1 00:05:06.470 --rc genhtml_legend=1 00:05:06.470 --rc geninfo_all_blocks=1 00:05:06.470 --rc geninfo_unexecuted_blocks=1 00:05:06.470 00:05:06.470 ' 00:05:06.470 00:49:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:06.470 00:49:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=156654 00:05:06.470 00:49:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.470 00:49:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 156654 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 156654 ']' 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.470 00:49:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.470 [2024-11-19 00:49:13.155650] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
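The alias_rpc flow traced here is short: start a bare spdk_tgt, wait for its default RPC socket, replay a configuration through load_config -i, then kill the target. Condensed sketch (the config fed to load_config is not shown in the trace, so the file name below is only a placeholder):
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!                      # 156654 in this run
  waitforlisten "$spdk_tgt_pid"
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py load_config -i < some_config.json   # placeholder input file
  kill "$spdk_tgt_pid"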
00:05:06.470 [2024-11-19 00:49:13.155741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156654 ] 00:05:06.729 [2024-11-19 00:49:13.279639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.729 [2024-11-19 00:49:13.385337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.668 00:49:14 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.668 00:49:14 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.668 00:49:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:07.927 00:49:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 156654 00:05:07.927 00:49:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 156654 ']' 00:05:07.928 00:49:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 156654 00:05:07.928 00:49:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.928 00:49:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.928 00:49:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156654 00:05:07.928 00:49:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.928 00:49:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.928 00:49:14 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 156654' 00:05:07.928 killing process with pid 156654 00:05:07.928 00:49:14 alias_rpc -- common/autotest_common.sh@973 -- # kill 156654 00:05:07.928 00:49:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 156654 00:05:10.465 00:05:10.465 real 0m3.878s 00:05:10.465 user 0m3.903s 00:05:10.465 sys 0m0.568s 00:05:10.465 00:49:16 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.465 00:49:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.465 ************************************ 00:05:10.465 END TEST alias_rpc 00:05:10.465 ************************************ 00:05:10.465 00:49:16 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:10.465 00:49:16 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.465 00:49:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.465 00:49:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.465 00:49:16 -- common/autotest_common.sh@10 -- # set +x 00:05:10.465 ************************************ 00:05:10.465 START TEST spdkcli_tcp 00:05:10.465 ************************************ 00:05:10.465 00:49:16 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.465 * Looking for test storage... 
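The alias_rpc run above follows the usual SPDK app-test pattern: start a bare spdk_tgt in the background, wait until it listens on the default RPC socket /var/tmp/spdk.sock, drive it with scripts/rpc.py (here load_config -i, which reads a JSON config from stdin), then kill the target and wait for it to exit. A minimal bash sketch of that flow; SPDK_DIR stands in for the workspace path, and the trap/wait helpers are simplified stand-ins for what common/autotest_common.sh actually does:

#!/usr/bin/env bash
# Sketch of the alias_rpc-style test flow (paths and helpers are illustrative).
SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk

"$SPDK_DIR/build/bin/spdk_tgt" &            # start the target in the background
spdk_tgt_pid=$!
trap 'kill -9 $spdk_tgt_pid; exit 1' ERR    # clean up if anything below fails

# Wait for the RPC listener on the default UNIX socket (simplified waitforlisten).
while ! "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done

# Exercise the RPC under test: feed a (here empty) JSON config on stdin.
echo '{}' | "$SPDK_DIR/scripts/rpc.py" load_config -i

kill "$spdk_tgt_pid"                        # tear down
wait "$spdk_tgt_pid" || true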
00:05:10.465 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli 00:05:10.465 00:49:16 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.465 00:49:16 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.465 00:49:16 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.465 00:49:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.465 00:49:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:10.465 00:49:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.465 00:49:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.465 --rc genhtml_branch_coverage=1 00:05:10.465 --rc genhtml_function_coverage=1 00:05:10.465 --rc genhtml_legend=1 00:05:10.465 --rc geninfo_all_blocks=1 00:05:10.465 --rc geninfo_unexecuted_blocks=1 00:05:10.465 00:05:10.465 ' 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.466 --rc genhtml_branch_coverage=1 00:05:10.466 --rc genhtml_function_coverage=1 00:05:10.466 --rc genhtml_legend=1 00:05:10.466 --rc geninfo_all_blocks=1 00:05:10.466 --rc 
geninfo_unexecuted_blocks=1 00:05:10.466 00:05:10.466 ' 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.466 --rc genhtml_branch_coverage=1 00:05:10.466 --rc genhtml_function_coverage=1 00:05:10.466 --rc genhtml_legend=1 00:05:10.466 --rc geninfo_all_blocks=1 00:05:10.466 --rc geninfo_unexecuted_blocks=1 00:05:10.466 00:05:10.466 ' 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.466 --rc genhtml_branch_coverage=1 00:05:10.466 --rc genhtml_function_coverage=1 00:05:10.466 --rc genhtml_legend=1 00:05:10.466 --rc geninfo_all_blocks=1 00:05:10.466 --rc geninfo_unexecuted_blocks=1 00:05:10.466 00:05:10.466 ' 00:05:10.466 00:49:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/common.sh 00:05:10.466 00:49:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:10.466 00:49:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py 00:05:10.466 00:49:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:10.466 00:49:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:10.466 00:49:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:10.466 00:49:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.466 00:49:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=157395 00:05:10.466 00:49:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.466 00:49:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 157395 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 157395 ']' 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.466 00:49:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.466 [2024-11-19 00:49:17.119018] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:10.466 [2024-11-19 00:49:17.119111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157395 ] 00:05:10.725 [2024-11-19 00:49:17.243413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.725 [2024-11-19 00:49:17.352073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.725 [2024-11-19 00:49:17.352093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.661 00:49:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.661 00:49:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:11.661 00:49:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=157623 00:05:11.661 00:49:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.661 00:49:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.920 [ 00:05:11.920 "bdev_malloc_delete", 00:05:11.920 "bdev_malloc_create", 00:05:11.920 "bdev_null_resize", 00:05:11.920 "bdev_null_delete", 00:05:11.921 "bdev_null_create", 00:05:11.921 "bdev_nvme_cuse_unregister", 00:05:11.921 "bdev_nvme_cuse_register", 00:05:11.921 "bdev_opal_new_user", 00:05:11.921 "bdev_opal_set_lock_state", 00:05:11.921 "bdev_opal_delete", 00:05:11.921 "bdev_opal_get_info", 00:05:11.921 "bdev_opal_create", 00:05:11.921 "bdev_nvme_opal_revert", 00:05:11.921 "bdev_nvme_opal_init", 00:05:11.921 "bdev_nvme_send_cmd", 00:05:11.921 "bdev_nvme_set_keys", 00:05:11.921 "bdev_nvme_get_path_iostat", 00:05:11.921 "bdev_nvme_get_mdns_discovery_info", 00:05:11.921 "bdev_nvme_stop_mdns_discovery", 00:05:11.921 "bdev_nvme_start_mdns_discovery", 00:05:11.921 "bdev_nvme_set_multipath_policy", 00:05:11.921 "bdev_nvme_set_preferred_path", 00:05:11.921 "bdev_nvme_get_io_paths", 00:05:11.921 "bdev_nvme_remove_error_injection", 00:05:11.921 "bdev_nvme_add_error_injection", 00:05:11.921 "bdev_nvme_get_discovery_info", 00:05:11.921 "bdev_nvme_stop_discovery", 00:05:11.921 "bdev_nvme_start_discovery", 00:05:11.921 "bdev_nvme_get_controller_health_info", 00:05:11.921 "bdev_nvme_disable_controller", 00:05:11.921 "bdev_nvme_enable_controller", 00:05:11.921 "bdev_nvme_reset_controller", 00:05:11.921 "bdev_nvme_get_transport_statistics", 00:05:11.921 "bdev_nvme_apply_firmware", 00:05:11.921 "bdev_nvme_detach_controller", 00:05:11.921 "bdev_nvme_get_controllers", 00:05:11.921 "bdev_nvme_attach_controller", 00:05:11.921 "bdev_nvme_set_hotplug", 00:05:11.921 "bdev_nvme_set_options", 00:05:11.921 "bdev_passthru_delete", 00:05:11.921 "bdev_passthru_create", 00:05:11.921 "bdev_lvol_set_parent_bdev", 00:05:11.921 "bdev_lvol_set_parent", 00:05:11.921 "bdev_lvol_check_shallow_copy", 00:05:11.921 "bdev_lvol_start_shallow_copy", 00:05:11.921 "bdev_lvol_grow_lvstore", 00:05:11.921 "bdev_lvol_get_lvols", 00:05:11.921 "bdev_lvol_get_lvstores", 00:05:11.921 "bdev_lvol_delete", 00:05:11.921 "bdev_lvol_set_read_only", 00:05:11.921 "bdev_lvol_resize", 00:05:11.921 "bdev_lvol_decouple_parent", 00:05:11.921 "bdev_lvol_inflate", 00:05:11.921 "bdev_lvol_rename", 00:05:11.921 "bdev_lvol_clone_bdev", 00:05:11.921 "bdev_lvol_clone", 00:05:11.921 "bdev_lvol_snapshot", 00:05:11.921 "bdev_lvol_create", 00:05:11.921 "bdev_lvol_delete_lvstore", 00:05:11.921 "bdev_lvol_rename_lvstore", 
00:05:11.921 "bdev_lvol_create_lvstore", 00:05:11.921 "bdev_raid_set_options", 00:05:11.921 "bdev_raid_remove_base_bdev", 00:05:11.921 "bdev_raid_add_base_bdev", 00:05:11.921 "bdev_raid_delete", 00:05:11.921 "bdev_raid_create", 00:05:11.921 "bdev_raid_get_bdevs", 00:05:11.921 "bdev_error_inject_error", 00:05:11.921 "bdev_error_delete", 00:05:11.921 "bdev_error_create", 00:05:11.921 "bdev_split_delete", 00:05:11.921 "bdev_split_create", 00:05:11.921 "bdev_delay_delete", 00:05:11.921 "bdev_delay_create", 00:05:11.921 "bdev_delay_update_latency", 00:05:11.921 "bdev_zone_block_delete", 00:05:11.921 "bdev_zone_block_create", 00:05:11.921 "blobfs_create", 00:05:11.921 "blobfs_detect", 00:05:11.921 "blobfs_set_cache_size", 00:05:11.921 "bdev_aio_delete", 00:05:11.921 "bdev_aio_rescan", 00:05:11.921 "bdev_aio_create", 00:05:11.921 "bdev_ftl_set_property", 00:05:11.921 "bdev_ftl_get_properties", 00:05:11.921 "bdev_ftl_get_stats", 00:05:11.921 "bdev_ftl_unmap", 00:05:11.921 "bdev_ftl_unload", 00:05:11.921 "bdev_ftl_delete", 00:05:11.921 "bdev_ftl_load", 00:05:11.921 "bdev_ftl_create", 00:05:11.921 "bdev_virtio_attach_controller", 00:05:11.921 "bdev_virtio_scsi_get_devices", 00:05:11.921 "bdev_virtio_detach_controller", 00:05:11.921 "bdev_virtio_blk_set_hotplug", 00:05:11.921 "bdev_iscsi_delete", 00:05:11.921 "bdev_iscsi_create", 00:05:11.921 "bdev_iscsi_set_options", 00:05:11.921 "accel_error_inject_error", 00:05:11.921 "ioat_scan_accel_module", 00:05:11.921 "dsa_scan_accel_module", 00:05:11.921 "iaa_scan_accel_module", 00:05:11.921 "keyring_file_remove_key", 00:05:11.921 "keyring_file_add_key", 00:05:11.921 "keyring_linux_set_options", 00:05:11.921 "fsdev_aio_delete", 00:05:11.921 "fsdev_aio_create", 00:05:11.921 "iscsi_get_histogram", 00:05:11.921 "iscsi_enable_histogram", 00:05:11.921 "iscsi_set_options", 00:05:11.921 "iscsi_get_auth_groups", 00:05:11.921 "iscsi_auth_group_remove_secret", 00:05:11.921 "iscsi_auth_group_add_secret", 00:05:11.921 "iscsi_delete_auth_group", 00:05:11.921 "iscsi_create_auth_group", 00:05:11.921 "iscsi_set_discovery_auth", 00:05:11.921 "iscsi_get_options", 00:05:11.921 "iscsi_target_node_request_logout", 00:05:11.921 "iscsi_target_node_set_redirect", 00:05:11.921 "iscsi_target_node_set_auth", 00:05:11.921 "iscsi_target_node_add_lun", 00:05:11.921 "iscsi_get_stats", 00:05:11.921 "iscsi_get_connections", 00:05:11.921 "iscsi_portal_group_set_auth", 00:05:11.921 "iscsi_start_portal_group", 00:05:11.921 "iscsi_delete_portal_group", 00:05:11.921 "iscsi_create_portal_group", 00:05:11.921 "iscsi_get_portal_groups", 00:05:11.921 "iscsi_delete_target_node", 00:05:11.921 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.921 "iscsi_target_node_add_pg_ig_maps", 00:05:11.921 "iscsi_create_target_node", 00:05:11.921 "iscsi_get_target_nodes", 00:05:11.921 "iscsi_delete_initiator_group", 00:05:11.921 "iscsi_initiator_group_remove_initiators", 00:05:11.921 "iscsi_initiator_group_add_initiators", 00:05:11.921 "iscsi_create_initiator_group", 00:05:11.921 "iscsi_get_initiator_groups", 00:05:11.921 "nvmf_set_crdt", 00:05:11.921 "nvmf_set_config", 00:05:11.921 "nvmf_set_max_subsystems", 00:05:11.921 "nvmf_stop_mdns_prr", 00:05:11.921 "nvmf_publish_mdns_prr", 00:05:11.921 "nvmf_subsystem_get_listeners", 00:05:11.921 "nvmf_subsystem_get_qpairs", 00:05:11.921 "nvmf_subsystem_get_controllers", 00:05:11.921 "nvmf_get_stats", 00:05:11.921 "nvmf_get_transports", 00:05:11.921 "nvmf_create_transport", 00:05:11.921 "nvmf_get_targets", 00:05:11.921 "nvmf_delete_target", 00:05:11.921 "nvmf_create_target", 
00:05:11.921 "nvmf_subsystem_allow_any_host", 00:05:11.921 "nvmf_subsystem_set_keys", 00:05:11.921 "nvmf_subsystem_remove_host", 00:05:11.921 "nvmf_subsystem_add_host", 00:05:11.921 "nvmf_ns_remove_host", 00:05:11.921 "nvmf_ns_add_host", 00:05:11.921 "nvmf_subsystem_remove_ns", 00:05:11.921 "nvmf_subsystem_set_ns_ana_group", 00:05:11.921 "nvmf_subsystem_add_ns", 00:05:11.921 "nvmf_subsystem_listener_set_ana_state", 00:05:11.921 "nvmf_discovery_get_referrals", 00:05:11.921 "nvmf_discovery_remove_referral", 00:05:11.921 "nvmf_discovery_add_referral", 00:05:11.921 "nvmf_subsystem_remove_listener", 00:05:11.921 "nvmf_subsystem_add_listener", 00:05:11.921 "nvmf_delete_subsystem", 00:05:11.921 "nvmf_create_subsystem", 00:05:11.921 "nvmf_get_subsystems", 00:05:11.921 "env_dpdk_get_mem_stats", 00:05:11.921 "nbd_get_disks", 00:05:11.921 "nbd_stop_disk", 00:05:11.921 "nbd_start_disk", 00:05:11.921 "ublk_recover_disk", 00:05:11.921 "ublk_get_disks", 00:05:11.921 "ublk_stop_disk", 00:05:11.921 "ublk_start_disk", 00:05:11.921 "ublk_destroy_target", 00:05:11.921 "ublk_create_target", 00:05:11.921 "virtio_blk_create_transport", 00:05:11.921 "virtio_blk_get_transports", 00:05:11.921 "vhost_controller_set_coalescing", 00:05:11.921 "vhost_get_controllers", 00:05:11.921 "vhost_delete_controller", 00:05:11.921 "vhost_create_blk_controller", 00:05:11.921 "vhost_scsi_controller_remove_target", 00:05:11.921 "vhost_scsi_controller_add_target", 00:05:11.921 "vhost_start_scsi_controller", 00:05:11.921 "vhost_create_scsi_controller", 00:05:11.921 "thread_set_cpumask", 00:05:11.921 "scheduler_set_options", 00:05:11.921 "framework_get_governor", 00:05:11.921 "framework_get_scheduler", 00:05:11.921 "framework_set_scheduler", 00:05:11.921 "framework_get_reactors", 00:05:11.921 "thread_get_io_channels", 00:05:11.921 "thread_get_pollers", 00:05:11.921 "thread_get_stats", 00:05:11.921 "framework_monitor_context_switch", 00:05:11.921 "spdk_kill_instance", 00:05:11.921 "log_enable_timestamps", 00:05:11.921 "log_get_flags", 00:05:11.921 "log_clear_flag", 00:05:11.921 "log_set_flag", 00:05:11.921 "log_get_level", 00:05:11.921 "log_set_level", 00:05:11.921 "log_get_print_level", 00:05:11.921 "log_set_print_level", 00:05:11.921 "framework_enable_cpumask_locks", 00:05:11.921 "framework_disable_cpumask_locks", 00:05:11.921 "framework_wait_init", 00:05:11.921 "framework_start_init", 00:05:11.921 "scsi_get_devices", 00:05:11.921 "bdev_get_histogram", 00:05:11.921 "bdev_enable_histogram", 00:05:11.921 "bdev_set_qos_limit", 00:05:11.921 "bdev_set_qd_sampling_period", 00:05:11.921 "bdev_get_bdevs", 00:05:11.921 "bdev_reset_iostat", 00:05:11.921 "bdev_get_iostat", 00:05:11.921 "bdev_examine", 00:05:11.921 "bdev_wait_for_examine", 00:05:11.921 "bdev_set_options", 00:05:11.921 "accel_get_stats", 00:05:11.921 "accel_set_options", 00:05:11.921 "accel_set_driver", 00:05:11.921 "accel_crypto_key_destroy", 00:05:11.921 "accel_crypto_keys_get", 00:05:11.921 "accel_crypto_key_create", 00:05:11.921 "accel_assign_opc", 00:05:11.921 "accel_get_module_info", 00:05:11.921 "accel_get_opc_assignments", 00:05:11.921 "vmd_rescan", 00:05:11.921 "vmd_remove_device", 00:05:11.921 "vmd_enable", 00:05:11.921 "sock_get_default_impl", 00:05:11.922 "sock_set_default_impl", 00:05:11.922 "sock_impl_set_options", 00:05:11.922 "sock_impl_get_options", 00:05:11.922 "iobuf_get_stats", 00:05:11.922 "iobuf_set_options", 00:05:11.922 "keyring_get_keys", 00:05:11.922 "framework_get_pci_devices", 00:05:11.922 "framework_get_config", 00:05:11.922 "framework_get_subsystems", 
00:05:11.922 "fsdev_set_opts", 00:05:11.922 "fsdev_get_opts", 00:05:11.922 "trace_get_info", 00:05:11.922 "trace_get_tpoint_group_mask", 00:05:11.922 "trace_disable_tpoint_group", 00:05:11.922 "trace_enable_tpoint_group", 00:05:11.922 "trace_clear_tpoint_mask", 00:05:11.922 "trace_set_tpoint_mask", 00:05:11.922 "notify_get_notifications", 00:05:11.922 "notify_get_types", 00:05:11.922 "spdk_get_version", 00:05:11.922 "rpc_get_methods" 00:05:11.922 ] 00:05:11.922 00:49:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.922 00:49:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.922 00:49:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 157395 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 157395 ']' 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 157395 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157395 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157395' 00:05:11.922 killing process with pid 157395 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 157395 00:05:11.922 00:49:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 157395 00:05:14.458 00:05:14.458 real 0m4.005s 00:05:14.458 user 0m7.296s 00:05:14.458 sys 0m0.598s 00:05:14.458 00:49:20 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.458 00:49:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.458 ************************************ 00:05:14.458 END TEST spdkcli_tcp 00:05:14.458 ************************************ 00:05:14.458 00:49:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.458 00:49:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.458 00:49:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.458 00:49:20 -- common/autotest_common.sh@10 -- # set +x 00:05:14.458 ************************************ 00:05:14.458 START TEST dpdk_mem_utility 00:05:14.458 ************************************ 00:05:14.458 00:49:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.458 * Looking for test storage... 
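The spdkcli_tcp run above checks that the same JSON-RPC interface also works over TCP: spdk_tgt is started on two cores (-m 0x3), socat forwards TCP port 9998 to the target's UNIX socket, and rpc.py is pointed at 127.0.0.1:9998 with retries and a per-attempt timeout; the long method list printed above is the rpc_get_methods reply relayed through that bridge. A minimal sketch of the bridge, reusing the port, socket path, and rpc.py flags shown in the log (SPDK_DIR is again a placeholder for the workspace path):

SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk

# Relay TCP port 9998 to the target's UNIX-domain RPC socket, as in the run above.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Drive the target over TCP: up to 100 retries, 2-second timeout per attempt.
"$SPDK_DIR/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null || true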
00:05:14.458 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.458 00:49:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.458 --rc genhtml_branch_coverage=1 00:05:14.458 --rc genhtml_function_coverage=1 00:05:14.458 --rc genhtml_legend=1 00:05:14.458 --rc geninfo_all_blocks=1 00:05:14.458 --rc geninfo_unexecuted_blocks=1 00:05:14.458 00:05:14.458 ' 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.458 --rc 
genhtml_branch_coverage=1 00:05:14.458 --rc genhtml_function_coverage=1 00:05:14.458 --rc genhtml_legend=1 00:05:14.458 --rc geninfo_all_blocks=1 00:05:14.458 --rc geninfo_unexecuted_blocks=1 00:05:14.458 00:05:14.458 ' 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.458 --rc genhtml_branch_coverage=1 00:05:14.458 --rc genhtml_function_coverage=1 00:05:14.458 --rc genhtml_legend=1 00:05:14.458 --rc geninfo_all_blocks=1 00:05:14.458 --rc geninfo_unexecuted_blocks=1 00:05:14.458 00:05:14.458 ' 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.458 --rc genhtml_branch_coverage=1 00:05:14.458 --rc genhtml_function_coverage=1 00:05:14.458 --rc genhtml_legend=1 00:05:14.458 --rc geninfo_all_blocks=1 00:05:14.458 --rc geninfo_unexecuted_blocks=1 00:05:14.458 00:05:14.458 ' 00:05:14.458 00:49:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:14.458 00:49:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=158157 00:05:14.458 00:49:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 158157 00:05:14.458 00:49:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 158157 ']' 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.458 00:49:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.718 [2024-11-19 00:49:21.170407] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:14.718 [2024-11-19 00:49:21.170493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158157 ] 00:05:14.718 [2024-11-19 00:49:21.291501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.718 [2024-11-19 00:49:21.393572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.658 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.658 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:15.658 00:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:15.658 00:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:15.658 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.658 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.658 { 00:05:15.658 "filename": "/tmp/spdk_mem_dump.txt" 00:05:15.658 } 00:05:15.658 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.658 00:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.658 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:15.658 1 heaps totaling size 816.000000 MiB 00:05:15.658 size: 816.000000 MiB heap id: 0 00:05:15.658 end heaps---------- 00:05:15.658 9 mempools totaling size 595.772034 MiB 00:05:15.658 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:15.658 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:15.658 size: 92.545471 MiB name: bdev_io_158157 00:05:15.658 size: 50.003479 MiB name: msgpool_158157 00:05:15.658 size: 36.509338 MiB name: fsdev_io_158157 00:05:15.658 size: 21.763794 MiB name: PDU_Pool 00:05:15.658 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:15.658 size: 4.133484 MiB name: evtpool_158157 00:05:15.658 size: 0.026123 MiB name: Session_Pool 00:05:15.658 end mempools------- 00:05:15.658 6 memzones totaling size 4.142822 MiB 00:05:15.658 size: 1.000366 MiB name: RG_ring_0_158157 00:05:15.658 size: 1.000366 MiB name: RG_ring_1_158157 00:05:15.658 size: 1.000366 MiB name: RG_ring_4_158157 00:05:15.658 size: 1.000366 MiB name: RG_ring_5_158157 00:05:15.658 size: 0.125366 MiB name: RG_ring_2_158157 00:05:15.658 size: 0.015991 MiB name: RG_ring_3_158157 00:05:15.658 end memzones------- 00:05:15.658 00:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:15.658 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:15.658 list of free elements. 
size: 16.857605 MiB 00:05:15.658 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:15.658 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:15.658 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:15.658 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:15.658 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:15.658 element at address: 0x200019200000 with size: 0.999329 MiB 00:05:15.658 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:15.658 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:15.658 element at address: 0x200018a00000 with size: 0.959900 MiB 00:05:15.658 element at address: 0x200019500040 with size: 0.937256 MiB 00:05:15.658 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:15.658 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:05:15.658 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:15.658 element at address: 0x200018e00000 with size: 0.491150 MiB 00:05:15.658 element at address: 0x200019600000 with size: 0.485657 MiB 00:05:15.658 element at address: 0x200012c00000 with size: 0.446167 MiB 00:05:15.658 element at address: 0x200028000000 with size: 0.411072 MiB 00:05:15.658 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:15.658 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:15.658 list of standard malloc elements. size: 199.221497 MiB 00:05:15.658 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:15.658 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:15.658 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:15.658 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:15.658 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:15.658 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:15.658 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:15.658 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:15.658 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:15.658 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:15.658 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:15.658 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:15.658 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:15.658 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:15.658 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:15.658 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:15.658 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:15.658 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:15.658 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:05:15.658 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:15.658 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:15.658 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:15.658 list of memzone associated elements. size: 599.920898 MiB 00:05:15.658 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:15.658 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:15.658 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:15.658 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:15.658 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:15.658 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_158157_0 00:05:15.658 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:15.658 associated memzone info: size: 48.002930 MiB name: MP_msgpool_158157_0 00:05:15.658 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:15.658 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_158157_0 00:05:15.658 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:15.658 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:15.658 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:15.658 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:15.658 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:15.659 associated memzone info: size: 3.000122 MiB name: MP_evtpool_158157_0 00:05:15.659 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:15.659 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_158157 00:05:15.659 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:15.659 associated memzone info: size: 1.007996 MiB name: MP_evtpool_158157 00:05:15.659 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:15.659 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:15.659 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:15.659 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:15.659 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:15.659 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:15.659 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:15.659 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:15.659 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:15.659 associated memzone info: size: 1.000366 MiB name: RG_ring_0_158157 00:05:15.659 element at address: 0x2000008ffb80 with size: 
1.000549 MiB 00:05:15.659 associated memzone info: size: 1.000366 MiB name: RG_ring_1_158157 00:05:15.659 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:15.659 associated memzone info: size: 1.000366 MiB name: RG_ring_4_158157 00:05:15.659 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:15.659 associated memzone info: size: 1.000366 MiB name: RG_ring_5_158157 00:05:15.659 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:15.659 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_158157 00:05:15.659 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:15.659 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_158157 00:05:15.659 element at address: 0x200018e7dbc0 with size: 0.500549 MiB 00:05:15.659 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:15.659 element at address: 0x200012c72380 with size: 0.500549 MiB 00:05:15.659 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:15.659 element at address: 0x20001967c540 with size: 0.250549 MiB 00:05:15.659 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:15.659 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:15.659 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_158157 00:05:15.659 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:15.659 associated memzone info: size: 0.125366 MiB name: RG_ring_2_158157 00:05:15.659 element at address: 0x200018af5bc0 with size: 0.031799 MiB 00:05:15.659 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:15.659 element at address: 0x2000280693c0 with size: 0.023804 MiB 00:05:15.659 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:15.659 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:15.659 associated memzone info: size: 0.015991 MiB name: RG_ring_3_158157 00:05:15.659 element at address: 0x20002806f540 with size: 0.002502 MiB 00:05:15.659 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:15.659 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:15.659 associated memzone info: size: 0.000183 MiB name: MP_msgpool_158157 00:05:15.659 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:15.659 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_158157 00:05:15.659 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:15.659 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_158157 00:05:15.659 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:15.659 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:15.659 00:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:15.659 00:49:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 158157 00:05:15.659 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 158157 ']' 00:05:15.659 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 158157 00:05:15.659 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:15.918 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.918 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158157 00:05:15.918 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.918 00:49:22 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.918 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158157' 00:05:15.918 killing process with pid 158157 00:05:15.918 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 158157 00:05:15.918 00:49:22 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 158157 00:05:18.455 00:05:18.455 real 0m3.791s 00:05:18.455 user 0m3.764s 00:05:18.455 sys 0m0.548s 00:05:18.455 00:49:24 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.455 00:49:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.455 ************************************ 00:05:18.455 END TEST dpdk_mem_utility 00:05:18.455 ************************************ 00:05:18.455 00:49:24 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event.sh 00:05:18.455 00:49:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.455 00:49:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.455 00:49:24 -- common/autotest_common.sh@10 -- # set +x 00:05:18.455 ************************************ 00:05:18.455 START TEST event 00:05:18.455 ************************************ 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event.sh 00:05:18.455 * Looking for test storage... 00:05:18.455 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.455 00:49:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.455 00:49:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.455 00:49:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.455 00:49:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.455 00:49:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.455 00:49:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.455 00:49:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.455 00:49:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.455 00:49:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.455 00:49:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.455 00:49:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.455 00:49:24 event -- scripts/common.sh@344 -- # case "$op" in 00:05:18.455 00:49:24 event -- scripts/common.sh@345 -- # : 1 00:05:18.455 00:49:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.455 00:49:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.455 00:49:24 event -- scripts/common.sh@365 -- # decimal 1 00:05:18.455 00:49:24 event -- scripts/common.sh@353 -- # local d=1 00:05:18.455 00:49:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.455 00:49:24 event -- scripts/common.sh@355 -- # echo 1 00:05:18.455 00:49:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.455 00:49:24 event -- scripts/common.sh@366 -- # decimal 2 00:05:18.455 00:49:24 event -- scripts/common.sh@353 -- # local d=2 00:05:18.455 00:49:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.455 00:49:24 event -- scripts/common.sh@355 -- # echo 2 00:05:18.455 00:49:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.455 00:49:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.455 00:49:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.455 00:49:24 event -- scripts/common.sh@368 -- # return 0 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.455 --rc genhtml_branch_coverage=1 00:05:18.455 --rc genhtml_function_coverage=1 00:05:18.455 --rc genhtml_legend=1 00:05:18.455 --rc geninfo_all_blocks=1 00:05:18.455 --rc geninfo_unexecuted_blocks=1 00:05:18.455 00:05:18.455 ' 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.455 --rc genhtml_branch_coverage=1 00:05:18.455 --rc genhtml_function_coverage=1 00:05:18.455 --rc genhtml_legend=1 00:05:18.455 --rc geninfo_all_blocks=1 00:05:18.455 --rc geninfo_unexecuted_blocks=1 00:05:18.455 00:05:18.455 ' 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.455 --rc genhtml_branch_coverage=1 00:05:18.455 --rc genhtml_function_coverage=1 00:05:18.455 --rc genhtml_legend=1 00:05:18.455 --rc geninfo_all_blocks=1 00:05:18.455 --rc geninfo_unexecuted_blocks=1 00:05:18.455 00:05:18.455 ' 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.455 --rc genhtml_branch_coverage=1 00:05:18.455 --rc genhtml_function_coverage=1 00:05:18.455 --rc genhtml_legend=1 00:05:18.455 --rc geninfo_all_blocks=1 00:05:18.455 --rc geninfo_unexecuted_blocks=1 00:05:18.455 00:05:18.455 ' 00:05:18.455 00:49:24 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:18.455 00:49:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:18.455 00:49:24 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:18.455 00:49:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.455 00:49:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.455 ************************************ 00:05:18.455 START TEST event_perf 00:05:18.455 ************************************ 00:05:18.456 00:49:24 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:18.456 Running I/O for 1 seconds...[2024-11-19 00:49:25.026201] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:18.456 [2024-11-19 00:49:25.026280] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158904 ] 00:05:18.714 [2024-11-19 00:49:25.148958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.714 [2024-11-19 00:49:25.257839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.714 [2024-11-19 00:49:25.257916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.715 [2024-11-19 00:49:25.257984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.715 Running I/O for 1 seconds...[2024-11-19 00:49:25.258006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.090 00:05:20.090 lcore 0: 210572 00:05:20.090 lcore 1: 210571 00:05:20.090 lcore 2: 210570 00:05:20.090 lcore 3: 210572 00:05:20.090 done. 00:05:20.090 00:05:20.090 real 0m1.494s 00:05:20.090 user 0m4.351s 00:05:20.090 sys 0m0.138s 00:05:20.090 00:49:26 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.090 00:49:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.090 ************************************ 00:05:20.090 END TEST event_perf 00:05:20.090 ************************************ 00:05:20.090 00:49:26 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:20.090 00:49:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:20.090 00:49:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.090 00:49:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.090 ************************************ 00:05:20.090 START TEST event_reactor 00:05:20.090 ************************************ 00:05:20.090 00:49:26 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:20.090 [2024-11-19 00:49:26.589639] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
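The event_perf run above binds one reactor to each core in the 0xF mask and, for the single second requested with -t 1, has every reactor submit and count events; the per-lcore counters printed at the end (about 210 k each here) are the raw result, and the roughly 4 s of user time over a 1.5 s wall-clock run is consistent with four busy-polling reactors. A small sketch of invoking the binary directly and summing the counters; the binary path matches the log, while the tee target and the awk post-processing are only illustrative, not part of the test:

SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk

# Run the event-framework benchmark on cores 0-3 for 1 second.
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1 | tee /tmp/event_perf.log

# Sum the per-lcore counts from lines of the form "lcore N: <count>".
awk '/^lcore/ {sum += $3} END {print "total events:", sum}' /tmp/event_perf.log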
00:05:20.090 [2024-11-19 00:49:26.589709] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159156 ] 00:05:20.090 [2024-11-19 00:49:26.707367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.349 [2024-11-19 00:49:26.812635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.738 test_start 00:05:21.738 oneshot 00:05:21.738 tick 100 00:05:21.738 tick 100 00:05:21.738 tick 250 00:05:21.738 tick 100 00:05:21.738 tick 100 00:05:21.738 tick 100 00:05:21.738 tick 250 00:05:21.738 tick 500 00:05:21.738 tick 100 00:05:21.738 tick 100 00:05:21.738 tick 250 00:05:21.738 tick 100 00:05:21.738 tick 100 00:05:21.738 test_end 00:05:21.738 00:05:21.738 real 0m1.470s 00:05:21.738 user 0m1.343s 00:05:21.738 sys 0m0.120s 00:05:21.738 00:49:28 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.738 00:49:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:21.738 ************************************ 00:05:21.738 END TEST event_reactor 00:05:21.738 ************************************ 00:05:21.738 00:49:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:21.738 00:49:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:21.738 00:49:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.738 00:49:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.738 ************************************ 00:05:21.738 START TEST event_reactor_perf 00:05:21.738 ************************************ 00:05:21.738 00:49:28 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:21.738 [2024-11-19 00:49:28.136119] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:21.738 [2024-11-19 00:49:28.136204] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159409 ] 00:05:21.738 [2024-11-19 00:49:28.264176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.738 [2024-11-19 00:49:28.368808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.119 test_start 00:05:23.119 test_end 00:05:23.119 Performance: 399482 events per second 00:05:23.119 00:05:23.119 real 0m1.487s 00:05:23.119 user 0m1.346s 00:05:23.119 sys 0m0.134s 00:05:23.119 00:49:29 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.119 00:49:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.119 ************************************ 00:05:23.119 END TEST event_reactor_perf 00:05:23.119 ************************************ 00:05:23.119 00:49:29 event -- event/event.sh@49 -- # uname -s 00:05:23.119 00:49:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:23.119 00:49:29 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.119 00:49:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.119 00:49:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.119 00:49:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.119 ************************************ 00:05:23.119 START TEST event_scheduler 00:05:23.119 ************************************ 00:05:23.119 00:49:29 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.119 * Looking for test storage... 
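The two runs above exercise the reactor on its own: event_reactor prints a short trace (the oneshot and tick 100/250/500 lines) of the test events it schedules during its 1-second window, while event_reactor_perf drives a single reactor as fast as possible and reports throughput, here 399482 events per second. Both binaries take the same -t switch for the run time in seconds, so a longer measurement only changes the argument; in the sketch below the 10-second duration is an arbitrary example, the autotest run above used 1 s:

SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk

# Same binaries as in the log; -t selects the run time in seconds.
"$SPDK_DIR/test/event/reactor/reactor" -t 10
"$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 10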
00:05:23.119 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler 00:05:23.119 00:49:29 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.119 00:49:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.119 00:49:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.378 00:49:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.378 00:49:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:23.378 00:49:29 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.378 00:49:29 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.378 --rc genhtml_branch_coverage=1 00:05:23.378 --rc genhtml_function_coverage=1 00:05:23.378 --rc genhtml_legend=1 00:05:23.378 --rc geninfo_all_blocks=1 00:05:23.378 --rc geninfo_unexecuted_blocks=1 00:05:23.378 00:05:23.378 ' 00:05:23.378 00:49:29 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.378 --rc genhtml_branch_coverage=1 00:05:23.378 --rc genhtml_function_coverage=1 00:05:23.378 --rc genhtml_legend=1 00:05:23.378 --rc geninfo_all_blocks=1 00:05:23.378 --rc geninfo_unexecuted_blocks=1 00:05:23.378 00:05:23.378 ' 00:05:23.378 00:49:29 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.378 --rc genhtml_branch_coverage=1 00:05:23.378 --rc genhtml_function_coverage=1 00:05:23.378 --rc genhtml_legend=1 00:05:23.378 --rc geninfo_all_blocks=1 00:05:23.378 --rc geninfo_unexecuted_blocks=1 00:05:23.378 00:05:23.378 ' 00:05:23.378 00:49:29 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.378 --rc genhtml_branch_coverage=1 00:05:23.378 --rc genhtml_function_coverage=1 00:05:23.378 --rc genhtml_legend=1 00:05:23.378 --rc geninfo_all_blocks=1 00:05:23.378 --rc geninfo_unexecuted_blocks=1 00:05:23.378 00:05:23.378 ' 00:05:23.378 00:49:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:23.379 00:49:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=159818 00:05:23.379 00:49:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:23.379 00:49:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.379 00:49:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 159818 
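The long block of scripts/common.sh trace above is the lcov version check: `lt 1.15 2` splits both version strings and compares them field by field before the newer lcov coverage flags are selected. A simplified, self-contained sketch of that comparison (illustrative only; the real cmp_versions helper also splits on '-' and ':' and supports the other comparison operators):

    # Return success (0) if version $1 is strictly lower than version $2,
    # comparing dot-separated numeric fields; missing fields count as 0.
    ver_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    # ver_lt 1.15 2  -> true, so the newer lcov option set is used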
00:05:23.379 00:49:29 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 159818 ']' 00:05:23.379 00:49:29 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.379 00:49:29 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.379 00:49:29 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.379 00:49:29 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.379 00:49:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.379 [2024-11-19 00:49:29.909480] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:23.379 [2024-11-19 00:49:29.909574] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159818 ] 00:05:23.379 [2024-11-19 00:49:30.034544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.638 [2024-11-19 00:49:30.154178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.638 [2024-11-19 00:49:30.154266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.638 [2024-11-19 00:49:30.154272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.638 [2024-11-19 00:49:30.154307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.205 00:49:30 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.205 00:49:30 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:24.205 00:49:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:24.205 00:49:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.205 00:49:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.205 [2024-11-19 00:49:30.752777] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:24.205 [2024-11-19 00:49:30.752803] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:24.205 [2024-11-19 00:49:30.752820] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:24.205 [2024-11-19 00:49:30.752830] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:24.205 [2024-11-19 00:49:30.752839] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:24.205 00:49:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.205 00:49:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:24.205 00:49:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.205 00:49:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.465 [2024-11-19 00:49:31.063151] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
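The scheduler test app above is started with `-m 0xF -p 0x2 --wait-for-rpc`, so subsystem initialization is deferred until the dynamic scheduler has been selected over RPC. A condensed sketch of that startup sequence (paths shortened; the wait for /var/tmp/spdk.sock to come up, done by waitforlisten in the harness, is omitted):

    # Start the test app with init deferred, then configure and start it over RPC.
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    # ... wait for /var/tmp/spdk.sock to be listening ...
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init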
00:05:24.465 00:49:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.465 00:49:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:24.465 00:49:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.465 00:49:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.465 00:49:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.465 ************************************ 00:05:24.465 START TEST scheduler_create_thread 00:05:24.465 ************************************ 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.465 2 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.465 3 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.465 4 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.465 5 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.465 6 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.465 7 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.465 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.724 8 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.724 9 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.724 10 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.724 00:49:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.098 00:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.098 00:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:26.098 00:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:26.098 00:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.098 00:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.032 00:49:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.032 00:05:27.032 real 0m2.627s 00:05:27.032 user 0m0.025s 00:05:27.032 sys 0m0.005s 00:05:27.032 00:49:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.032 00:49:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.032 ************************************ 00:05:27.032 END TEST scheduler_create_thread 00:05:27.032 ************************************ 00:05:27.291 00:49:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:27.291 00:49:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 159818 00:05:27.291 00:49:33 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 159818 ']' 00:05:27.291 00:49:33 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 159818 00:05:27.291 00:49:33 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:27.291 00:49:33 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.291 00:49:33 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 159818 00:05:27.291 00:49:33 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:27.291 00:49:33 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:27.291 00:49:33 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 159818' 00:05:27.291 killing process with pid 159818 00:05:27.291 00:49:33 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 159818 00:05:27.291 00:49:33 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 159818 00:05:27.549 [2024-11-19 00:49:34.203848] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
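The scheduler_create_thread trace above drives the app through its per-thread RPCs: create pinned threads with varying active percentages, retarget one with scheduler_thread_set_active, delete another, then kill the app. A condensed recap of that sequence (the scheduler_thread_* calls come from the test's own scheduler_plugin loaded with --plugin, not from stock rpc.py, and assume the plugin directory is on PYTHONPATH as the harness arranges):

    rpc="./scripts/rpc.py --plugin scheduler_plugin"
    # Create an active thread pinned to core 0 and capture its thread id.
    tid=$($rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    # Lower its requested busy percentage, then remove it again.
    $rpc scheduler_thread_set_active "$tid" 50
    $rpc scheduler_thread_delete "$tid"
    # Teardown as in the trace: killprocess sends a signal and waits.
    kill "$scheduler_pid" && wait "$scheduler_pid"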
00:05:28.927 00:05:28.927 real 0m5.660s 00:05:28.927 user 0m9.997s 00:05:28.927 sys 0m0.503s 00:05:28.927 00:49:35 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.927 00:49:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.927 ************************************ 00:05:28.927 END TEST event_scheduler 00:05:28.927 ************************************ 00:05:28.927 00:49:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.927 00:49:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.927 00:49:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.927 00:49:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.927 00:49:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.927 ************************************ 00:05:28.927 START TEST app_repeat 00:05:28.927 ************************************ 00:05:28.927 00:49:35 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:28.927 00:49:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.927 00:49:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.927 00:49:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.927 00:49:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.927 00:49:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.927 00:49:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.927 00:49:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.927 00:49:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=160822 00:05:28.928 00:49:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.928 00:49:35 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.928 00:49:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 160822' 00:05:28.928 Process app_repeat pid: 160822 00:05:28.928 00:49:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.928 00:49:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.928 spdk_app_start Round 0 00:05:28.928 00:49:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 160822 /var/tmp/spdk-nbd.sock 00:05:28.928 00:49:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 160822 ']' 00:05:28.928 00:49:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.928 00:49:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.928 00:49:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.928 00:49:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.928 00:49:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.928 [2024-11-19 00:49:35.448016] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:28.928 [2024-11-19 00:49:35.448102] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160822 ] 00:05:28.928 [2024-11-19 00:49:35.570605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.187 [2024-11-19 00:49:35.680374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.187 [2024-11-19 00:49:35.680397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.756 00:49:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.756 00:49:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:29.756 00:49:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.015 Malloc0 00:05:30.015 00:49:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.275 Malloc1 00:05:30.275 00:49:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.275 00:49:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.534 /dev/nbd0 00:05:30.534 00:49:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.534 00:49:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.534 1+0 records in 00:05:30.534 1+0 records out 00:05:30.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213721 s, 19.2 MB/s 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.534 00:49:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.534 00:49:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.534 00:49:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.535 00:49:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.793 /dev/nbd1 00:05:30.793 00:49:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.793 00:49:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.793 1+0 records in 00:05:30.793 1+0 records out 00:05:30.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157648 s, 26.0 MB/s 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.793 00:49:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.793 00:49:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.793 00:49:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.794 
00:49:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.794 00:49:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.794 00:49:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.053 { 00:05:31.053 "nbd_device": "/dev/nbd0", 00:05:31.053 "bdev_name": "Malloc0" 00:05:31.053 }, 00:05:31.053 { 00:05:31.053 "nbd_device": "/dev/nbd1", 00:05:31.053 "bdev_name": "Malloc1" 00:05:31.053 } 00:05:31.053 ]' 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.053 { 00:05:31.053 "nbd_device": "/dev/nbd0", 00:05:31.053 "bdev_name": "Malloc0" 00:05:31.053 }, 00:05:31.053 { 00:05:31.053 "nbd_device": "/dev/nbd1", 00:05:31.053 "bdev_name": "Malloc1" 00:05:31.053 } 00:05:31.053 ]' 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.053 /dev/nbd1' 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.053 /dev/nbd1' 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.053 256+0 records in 00:05:31.053 256+0 records out 00:05:31.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00535136 s, 196 MB/s 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.053 256+0 records in 00:05:31.053 256+0 records out 00:05:31.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159105 s, 65.9 MB/s 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.053 256+0 records in 00:05:31.053 256+0 records out 00:05:31.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196196 s, 53.4 MB/s 00:05:31.053 00:49:37 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.053 00:49:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.054 00:49:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.313 00:49:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.313 00:49:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.313 00:49:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.313 00:49:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.313 00:49:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.313 00:49:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.313 00:49:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.313 00:49:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.313 00:49:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.313 00:49:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.572 00:49:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.831 00:49:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.831 00:49:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.090 00:49:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.473 [2024-11-19 00:49:39.872570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.473 [2024-11-19 00:49:39.972765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.473 [2024-11-19 00:49:39.972768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.732 [2024-11-19 00:49:40.171287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.732 [2024-11-19 00:49:40.171335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.110 00:49:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.110 00:49:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.110 spdk_app_start Round 1 00:05:35.110 00:49:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 160822 /var/tmp/spdk-nbd.sock 00:05:35.110 00:49:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 160822 ']' 00:05:35.110 00:49:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.110 00:49:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.110 00:49:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:35.110 00:49:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.110 00:49:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.369 00:49:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.369 00:49:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.369 00:49:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.629 Malloc0 00:05:35.629 00:49:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.889 Malloc1 00:05:35.889 00:49:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.889 00:49:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.148 /dev/nbd0 00:05:36.148 00:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.148 00:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.148 00:49:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.148 00:49:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.148 00:49:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.148 00:49:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.148 00:49:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.148 00:49:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.148 00:49:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.148 00:49:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.149 00:49:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:36.149 1+0 records in 00:05:36.149 1+0 records out 00:05:36.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192286 s, 21.3 MB/s 00:05:36.149 00:49:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:36.149 00:49:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.149 00:49:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:36.149 00:49:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.149 00:49:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.149 00:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.149 00:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.149 00:49:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.408 /dev/nbd1 00:05:36.408 00:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.408 00:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.408 1+0 records in 00:05:36.408 1+0 records out 00:05:36.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208643 s, 19.6 MB/s 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.408 00:49:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.408 00:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.408 00:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.408 00:49:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.408 00:49:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.408 00:49:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:36.668 { 00:05:36.668 "nbd_device": "/dev/nbd0", 00:05:36.668 "bdev_name": "Malloc0" 00:05:36.668 }, 00:05:36.668 { 00:05:36.668 "nbd_device": "/dev/nbd1", 00:05:36.668 "bdev_name": "Malloc1" 00:05:36.668 } 00:05:36.668 ]' 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.668 { 00:05:36.668 "nbd_device": "/dev/nbd0", 00:05:36.668 "bdev_name": "Malloc0" 00:05:36.668 }, 00:05:36.668 { 00:05:36.668 "nbd_device": "/dev/nbd1", 00:05:36.668 "bdev_name": "Malloc1" 00:05:36.668 } 00:05:36.668 ]' 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.668 /dev/nbd1' 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.668 /dev/nbd1' 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.668 256+0 records in 00:05:36.668 256+0 records out 00:05:36.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104634 s, 100 MB/s 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.668 256+0 records in 00:05:36.668 256+0 records out 00:05:36.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161932 s, 64.8 MB/s 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.668 256+0 records in 00:05:36.668 256+0 records out 00:05:36.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192562 s, 54.5 MB/s 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.668 00:49:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.927 00:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.927 00:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.927 00:49:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.927 00:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.927 00:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.927 00:49:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.927 00:49:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.927 00:49:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.927 00:49:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.927 00:49:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.187 00:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.447 00:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.447 00:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.447 00:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.447 00:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.447 00:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.447 00:49:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.447 00:49:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.447 00:49:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.447 00:49:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.447 00:49:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.706 00:49:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.089 [2024-11-19 00:49:45.460473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.089 [2024-11-19 00:49:45.562051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.089 [2024-11-19 00:49:45.562066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.089 [2024-11-19 00:49:45.752820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.089 [2024-11-19 00:49:45.752872] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.996 00:49:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.997 00:49:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.997 spdk_app_start Round 2 00:05:40.997 00:49:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 160822 /var/tmp/spdk-nbd.sock 00:05:40.997 00:49:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 160822 ']' 00:05:40.997 00:49:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.997 00:49:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.997 00:49:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
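After the exports are stopped, the harness confirms nothing is left behind by listing the NBD disks over RPC and counting the /dev/nbd entries (the nbd_get_count step in the trace). A minimal sketch of that check:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 0 ] || echo "unexpected NBD devices still exported: $count"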
00:05:40.997 00:49:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.997 00:49:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.997 00:49:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.997 00:49:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.997 00:49:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.257 Malloc0 00:05:41.257 00:49:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.517 Malloc1 00:05:41.517 00:49:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.517 00:49:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.776 /dev/nbd0 00:05:41.776 00:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.776 00:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:41.776 1+0 records in 00:05:41.776 1+0 records out 00:05:41.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231075 s, 17.7 MB/s 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.776 00:49:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.776 00:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.777 00:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.777 00:49:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.777 /dev/nbd1 00:05:42.036 00:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.036 00:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.037 1+0 records in 00:05:42.037 1+0 records out 00:05:42.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218664 s, 18.7 MB/s 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.037 00:49:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.037 00:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.037 00:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.037 00:49:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.037 00:49:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.037 00:49:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.037 00:49:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:42.037 { 00:05:42.037 "nbd_device": "/dev/nbd0", 00:05:42.037 "bdev_name": "Malloc0" 00:05:42.037 }, 00:05:42.037 { 00:05:42.037 "nbd_device": "/dev/nbd1", 00:05:42.037 "bdev_name": "Malloc1" 00:05:42.037 } 00:05:42.037 ]' 00:05:42.037 00:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.037 00:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.037 { 00:05:42.037 "nbd_device": "/dev/nbd0", 00:05:42.037 "bdev_name": "Malloc0" 00:05:42.037 }, 00:05:42.037 { 00:05:42.037 "nbd_device": "/dev/nbd1", 00:05:42.037 "bdev_name": "Malloc1" 00:05:42.037 } 00:05:42.037 ]' 00:05:42.296 00:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.296 /dev/nbd1' 00:05:42.296 00:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.296 /dev/nbd1' 00:05:42.296 00:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.297 256+0 records in 00:05:42.297 256+0 records out 00:05:42.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01052 s, 99.7 MB/s 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.297 256+0 records in 00:05:42.297 256+0 records out 00:05:42.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165495 s, 63.4 MB/s 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.297 256+0 records in 00:05:42.297 256+0 records out 00:05:42.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186402 s, 56.3 MB/s 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.297 00:49:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.556 00:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.556 00:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.556 00:49:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.556 00:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.556 00:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.556 00:49:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.556 00:49:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.556 00:49:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.556 00:49:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.556 00:49:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.814 00:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.072 00:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.072 00:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.072 00:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.072 00:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.072 00:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.072 00:49:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.072 00:49:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.072 00:49:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.072 00:49:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.072 00:49:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.331 00:49:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.708 [2024-11-19 00:49:51.071537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.708 [2024-11-19 00:49:51.171376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.708 [2024-11-19 00:49:51.171379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.708 [2024-11-19 00:49:51.362507] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.708 [2024-11-19 00:49:51.362556] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.612 00:49:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 160822 /var/tmp/spdk-nbd.sock 00:05:46.612 00:49:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 160822 ']' 00:05:46.612 00:49:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.612 00:49:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.612 00:49:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
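
The app_repeat rounds traced above exercise SPDK's NBD export path end to end: two malloc bdevs are created over RPC, attached as /dev/nbd0 and /dev/nbd1, filled from a random reference file, read back with cmp, then detached and re-counted. A condensed editorial sketch of that same flow in plain bash follows (the RPC socket path is taken from the trace; the scratch-file path is a placeholder, and the suite's real helpers live in bdev/nbd_common.sh):

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdrandtest                              # placeholder scratch file

    $rpc -s $sock bdev_malloc_create 64 4096          # -> Malloc0
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0    # expose the bdev as an nbd device

    dd if=/dev/urandom of=$tmp bs=4096 count=256      # 1 MiB of reference data
    dd if=$tmp of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M $tmp /dev/nbd0                       # verify what the bdev stored

    # count exported nbd devices from the RPC JSON (here: 1)
    $rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd
    $rpc -s $sock nbd_stop_disk /dev/nbd0
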
00:05:46.612 00:49:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.612 00:49:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:46.612 00:49:53 event.app_repeat -- event/event.sh@39 -- # killprocess 160822 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 160822 ']' 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 160822 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 160822 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 160822' 00:05:46.612 killing process with pid 160822 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 160822 00:05:46.612 00:49:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 160822 00:05:47.550 spdk_app_start is called in Round 0. 00:05:47.550 Shutdown signal received, stop current app iteration 00:05:47.550 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:47.550 spdk_app_start is called in Round 1. 00:05:47.550 Shutdown signal received, stop current app iteration 00:05:47.550 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:47.550 spdk_app_start is called in Round 2. 00:05:47.550 Shutdown signal received, stop current app iteration 00:05:47.550 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:47.550 spdk_app_start is called in Round 3. 
00:05:47.550 Shutdown signal received, stop current app iteration 00:05:47.550 00:49:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:47.550 00:49:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:47.550 00:05:47.550 real 0m18.783s 00:05:47.550 user 0m39.876s 00:05:47.550 sys 0m2.596s 00:05:47.550 00:49:54 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.550 00:49:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.550 ************************************ 00:05:47.550 END TEST app_repeat 00:05:47.550 ************************************ 00:05:47.550 00:49:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:47.550 00:49:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:47.550 00:49:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.550 00:49:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.550 00:49:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.810 ************************************ 00:05:47.810 START TEST cpu_locks 00:05:47.810 ************************************ 00:05:47.810 00:49:54 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:47.810 * Looking for test storage... 00:05:47.810 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event 00:05:47.810 00:49:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.810 00:49:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.810 00:49:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.811 00:49:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.811 00:49:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:47.811 00:49:54 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.811 00:49:54 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.811 --rc genhtml_branch_coverage=1 00:05:47.811 --rc genhtml_function_coverage=1 00:05:47.811 --rc genhtml_legend=1 00:05:47.811 --rc geninfo_all_blocks=1 00:05:47.811 --rc geninfo_unexecuted_blocks=1 00:05:47.811 00:05:47.811 ' 00:05:47.811 00:49:54 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.811 --rc genhtml_branch_coverage=1 00:05:47.811 --rc genhtml_function_coverage=1 00:05:47.811 --rc genhtml_legend=1 00:05:47.811 --rc geninfo_all_blocks=1 00:05:47.811 --rc geninfo_unexecuted_blocks=1 00:05:47.811 00:05:47.811 ' 00:05:47.811 00:49:54 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.811 --rc genhtml_branch_coverage=1 00:05:47.811 --rc genhtml_function_coverage=1 00:05:47.811 --rc genhtml_legend=1 00:05:47.811 --rc geninfo_all_blocks=1 00:05:47.811 --rc geninfo_unexecuted_blocks=1 00:05:47.811 00:05:47.811 ' 00:05:47.811 00:49:54 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.811 --rc genhtml_branch_coverage=1 00:05:47.811 --rc genhtml_function_coverage=1 00:05:47.811 --rc genhtml_legend=1 00:05:47.811 --rc geninfo_all_blocks=1 00:05:47.811 --rc geninfo_unexecuted_blocks=1 00:05:47.811 00:05:47.811 ' 00:05:47.811 00:49:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:47.811 00:49:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:47.811 00:49:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:47.811 00:49:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:47.811 00:49:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.811 00:49:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.811 00:49:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.811 ************************************ 
00:05:47.811 START TEST default_locks 00:05:47.811 ************************************ 00:05:47.811 00:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:47.811 00:49:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=164120 00:05:47.811 00:49:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 164120 00:05:47.811 00:49:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.811 00:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 164120 ']' 00:05:47.811 00:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.811 00:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.811 00:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.811 00:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.811 00:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.071 [2024-11-19 00:49:54.544759] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:48.071 [2024-11-19 00:49:54.544851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164120 ] 00:05:48.071 [2024-11-19 00:49:54.669104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.330 [2024-11-19 00:49:54.773376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.900 00:49:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.900 00:49:55 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:48.900 00:49:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 164120 00:05:48.900 00:49:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 164120 00:05:48.900 00:49:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.469 lslocks: write error 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 164120 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 164120 ']' 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 164120 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164120 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164120' 
00:05:49.469 killing process with pid 164120 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 164120 00:05:49.469 00:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 164120 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 164120 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 164120 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 164120 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 164120 ']' 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.008 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (164120) - No such process 00:05:52.008 ERROR: process (pid: 164120) is no longer running 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.008 00:05:52.008 real 0m3.918s 00:05:52.008 user 0m3.903s 00:05:52.008 sys 0m0.677s 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.008 00:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.008 ************************************ 00:05:52.008 END TEST default_locks 00:05:52.008 ************************************ 00:05:52.008 00:49:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:52.008 00:49:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.009 00:49:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.009 00:49:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.009 ************************************ 00:05:52.009 START TEST default_locks_via_rpc 00:05:52.009 ************************************ 00:05:52.009 00:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:52.009 00:49:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=164783 00:05:52.009 00:49:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 164783 00:05:52.009 00:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 164783 ']' 00:05:52.009 00:49:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.009 00:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.009 00:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.009 00:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
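
default_locks, which completes just above, is the simplest form of the check that the rest of cpu_locks keeps repeating: start spdk_tgt pinned to a single core, confirm with lslocks that the process holds a lock file whose name contains spdk_cpu_lock, then confirm the lock is gone once the process is. A minimal sketch of that check, with the binary path taken from the trace and the waiting and cleanup hedged as placeholders:

    spdk_tgt=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt

    locks_exist() {
        # the target holds one lock file per claimed core; the suite simply
        # greps lslocks output for the spdk_cpu_lock prefix seen above
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    $spdk_tgt -m 0x1 &                # claim core 0 only
    pid=$!
    sleep 1                           # the suite instead waits on the RPC socket
    locks_exist "$pid" && echo "core lock held by $pid"

    kill "$pid"; wait "$pid"
    locks_exist "$pid" || echo "core lock released"
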
00:05:52.009 00:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.009 00:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.009 [2024-11-19 00:49:58.526029] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:52.009 [2024-11-19 00:49:58.526120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164783 ] 00:05:52.009 [2024-11-19 00:49:58.650685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.269 [2024-11-19 00:49:58.755848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.208 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.208 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.208 00:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 164783 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 164783 00:05:53.209 00:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.468 00:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 164783 00:05:53.468 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 164783 ']' 00:05:53.468 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 164783 00:05:53.468 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:53.468 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.469 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164783 00:05:53.469 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.469 00:49:59 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.469 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164783' 00:05:53.469 killing process with pid 164783 00:05:53.469 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 164783 00:05:53.469 00:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 164783 00:05:56.007 00:05:56.007 real 0m3.832s 00:05:56.007 user 0m3.834s 00:05:56.007 sys 0m0.628s 00:05:56.007 00:50:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.007 00:50:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.007 ************************************ 00:05:56.007 END TEST default_locks_via_rpc 00:05:56.007 ************************************ 00:05:56.007 00:50:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.007 00:50:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.007 00:50:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.007 00:50:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.007 ************************************ 00:05:56.007 START TEST non_locking_app_on_locked_coremask 00:05:56.007 ************************************ 00:05:56.007 00:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:56.007 00:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=165490 00:05:56.007 00:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 165490 /var/tmp/spdk.sock 00:05:56.007 00:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.007 00:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 165490 ']' 00:05:56.007 00:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.007 00:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.007 00:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.007 00:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.007 00:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.007 [2024-11-19 00:50:02.426252] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
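
default_locks_via_rpc, finishing in the trace just above, drives the same core-lock behaviour at runtime rather than at launch: the target starts normally, releases its lock files when asked, and re-acquires them on request. The two RPC names appear verbatim in the trace and can be issued directly against the default socket, for example:

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py

    $rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop the per-core lock files
    $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # take them again
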
00:05:56.007 [2024-11-19 00:50:02.426345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165490 ] 00:05:56.007 [2024-11-19 00:50:02.548979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.007 [2024-11-19 00:50:02.654610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=165717 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 165717 /var/tmp/spdk2.sock 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 165717 ']' 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.946 00:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.946 [2024-11-19 00:50:03.557665] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:56.946 [2024-11-19 00:50:03.557758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165717 ] 00:05:57.206 [2024-11-19 00:50:03.711729] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.206 [2024-11-19 00:50:03.711775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.466 [2024-11-19 00:50:03.926944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.375 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.375 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.375 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 165490 00:05:59.375 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 165490 00:05:59.375 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.314 lslocks: write error 00:06:00.314 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 165490 00:06:00.314 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 165490 ']' 00:06:00.314 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 165490 00:06:00.314 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.314 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.314 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165490 00:06:00.314 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.315 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.315 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165490' 00:06:00.315 killing process with pid 165490 00:06:00.315 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 165490 00:06:00.315 00:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 165490 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 165717 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 165717 ']' 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 165717 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165717 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165717' 00:06:05.592 killing 
process with pid 165717 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 165717 00:06:05.592 00:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 165717 00:06:06.973 00:06:06.973 real 0m11.209s 00:06:06.973 user 0m11.468s 00:06:06.973 sys 0m1.258s 00:06:06.973 00:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.973 00:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.973 ************************************ 00:06:06.973 END TEST non_locking_app_on_locked_coremask 00:06:06.973 ************************************ 00:06:06.973 00:50:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:06.973 00:50:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.973 00:50:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.973 00:50:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.973 ************************************ 00:06:06.973 START TEST locking_app_on_unlocked_coremask 00:06:06.973 ************************************ 00:06:06.973 00:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:06.973 00:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=167351 00:06:06.973 00:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 167351 /var/tmp/spdk.sock 00:06:06.973 00:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:06.973 00:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 167351 ']' 00:06:06.973 00:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.973 00:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.973 00:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.973 00:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.973 00:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.233 [2024-11-19 00:50:13.705962] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:07.233 [2024-11-19 00:50:13.706048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167351 ] 00:06:07.233 [2024-11-19 00:50:13.828482] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.233 [2024-11-19 00:50:13.828517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.492 [2024-11-19 00:50:13.936440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=167578 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 167578 /var/tmp/spdk2.sock 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 167578 ']' 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.061 00:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.321 [2024-11-19 00:50:14.829891] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:08.321 [2024-11-19 00:50:14.829975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167578 ] 00:06:08.321 [2024-11-19 00:50:14.985832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.581 [2024-11-19 00:50:15.186896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 167578 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 167578 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.127 lslocks: write error 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 167351 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 167351 ']' 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 167351 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.127 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167351 00:06:11.387 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.387 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.387 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167351' 00:06:11.387 killing process with pid 167351 00:06:11.387 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 167351 00:06:11.387 00:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 167351 00:06:16.665 00:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 167578 00:06:16.665 00:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 167578 ']' 00:06:16.665 00:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 167578 00:06:16.665 00:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.665 00:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.665 00:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167578 00:06:16.665 00:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.665 00:50:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.665 00:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167578' 00:06:16.665 killing process with pid 167578 00:06:16.665 00:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 167578 00:06:16.665 00:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 167578 00:06:18.046 00:06:18.046 real 0m11.085s 00:06:18.046 user 0m11.302s 00:06:18.046 sys 0m1.237s 00:06:18.046 00:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.046 00:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.046 ************************************ 00:06:18.046 END TEST locking_app_on_unlocked_coremask 00:06:18.046 ************************************ 00:06:18.306 00:50:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:18.306 00:50:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.306 00:50:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.306 00:50:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.306 ************************************ 00:06:18.306 START TEST locking_app_on_locked_coremask 00:06:18.306 ************************************ 00:06:18.306 00:50:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:18.306 00:50:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=169234 00:06:18.306 00:50:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 169234 /var/tmp/spdk.sock 00:06:18.306 00:50:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.306 00:50:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 169234 ']' 00:06:18.306 00:50:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.306 00:50:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.306 00:50:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.306 00:50:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.306 00:50:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.306 [2024-11-19 00:50:24.860611] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:18.306 [2024-11-19 00:50:24.860716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169234 ] 00:06:18.306 [2024-11-19 00:50:24.982272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.566 [2024-11-19 00:50:25.085527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=169437 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 169437 /var/tmp/spdk2.sock 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 169437 /var/tmp/spdk2.sock 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 169437 /var/tmp/spdk2.sock 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 169437 ']' 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.504 00:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.504 [2024-11-19 00:50:25.976064] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:19.504 [2024-11-19 00:50:25.976177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169437 ] 00:06:19.504 [2024-11-19 00:50:26.132222] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 169234 has claimed it. 00:06:19.504 [2024-11-19 00:50:26.132277] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.073 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (169437) - No such process 00:06:20.073 ERROR: process (pid: 169437) is no longer running 00:06:20.073 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.073 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:20.073 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:20.073 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.073 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.073 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.073 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 169234 00:06:20.073 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 169234 00:06:20.073 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.332 lslocks: write error 00:06:20.332 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 169234 00:06:20.332 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 169234 ']' 00:06:20.332 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 169234 00:06:20.332 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.332 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.332 00:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 169234 00:06:20.592 00:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.592 00:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.592 00:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 169234' 00:06:20.592 killing process with pid 169234 00:06:20.592 00:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 169234 00:06:20.592 00:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 169234 00:06:23.130 00:06:23.130 real 0m4.532s 00:06:23.130 user 0m4.721s 00:06:23.130 sys 0m0.795s 00:06:23.130 00:50:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.130 
00:50:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.130 ************************************ 00:06:23.130 END TEST locking_app_on_locked_coremask 00:06:23.130 ************************************ 00:06:23.130 00:50:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:23.130 00:50:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.130 00:50:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.130 00:50:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.130 ************************************ 00:06:23.130 START TEST locking_overlapped_coremask 00:06:23.130 ************************************ 00:06:23.130 00:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:23.130 00:50:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=170144 00:06:23.130 00:50:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 170144 /var/tmp/spdk.sock 00:06:23.130 00:50:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:23.130 00:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 170144 ']' 00:06:23.130 00:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.130 00:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.130 00:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.130 00:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.130 00:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.130 [2024-11-19 00:50:29.457942] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
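The locking_app_on_locked_coremask run that just finished shows the basic collision: the first spdk_tgt (-m 0x1) locks core 0, so a second target launched with the same mask aborts at startup ("Cannot create lock on core 0") and its waitforlisten ends in "No such process". A rough by-hand reproduction, run from the SPDK checkout (the sleep is a crude stand-in for waitforlisten and error handling is omitted):

  build/bin/spdk_tgt -m 0x1 &                      # first target claims core 0
  first=$!
  sleep 2
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock \
      || echo "second target exited as expected"   # same mask -> lock claim fails
  lslocks -p "$first" | grep spdk_cpu_lock         # survivor still holds the core lock file
  kill "$first"

The locking_overlapped_coremask run that continues below repeats the same check with partially overlapping masks (0x7 vs 0x1c) instead of identical ones.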
00:06:23.130 [2024-11-19 00:50:29.458046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170144 ] 00:06:23.130 [2024-11-19 00:50:29.581297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.130 [2024-11-19 00:50:29.692373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.130 [2024-11-19 00:50:29.692400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.130 [2024-11-19 00:50:29.692422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=170352 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 170352 /var/tmp/spdk2.sock 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 170352 /var/tmp/spdk2.sock 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 170352 /var/tmp/spdk2.sock 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 170352 ']' 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.068 00:50:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.068 [2024-11-19 00:50:30.649011] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:24.068 [2024-11-19 00:50:30.649099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170352 ] 00:06:24.334 [2024-11-19 00:50:30.808975] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 170144 has claimed it. 00:06:24.334 [2024-11-19 00:50:30.809028] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.593 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (170352) - No such process 00:06:24.593 ERROR: process (pid: 170352) is no longer running 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 170144 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 170144 ']' 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 170144 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.593 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170144 00:06:24.852 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.852 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.852 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 170144' 00:06:24.852 killing process with pid 170144 00:06:24.852 00:50:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 170144 00:06:24.852 00:50:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 170144 00:06:27.390 00:06:27.390 real 0m4.334s 00:06:27.390 user 0m11.958s 00:06:27.390 sys 0m0.612s 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.390 ************************************ 00:06:27.390 END TEST locking_overlapped_coremask 00:06:27.390 ************************************ 00:06:27.390 00:50:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.390 00:50:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.390 00:50:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.390 00:50:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.390 ************************************ 00:06:27.390 START TEST locking_overlapped_coremask_via_rpc 00:06:27.390 ************************************ 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=170869 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 170869 /var/tmp/spdk.sock 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 170869 ']' 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.390 00:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.390 [2024-11-19 00:50:33.862667] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:27.390 [2024-11-19 00:50:33.862756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170869 ] 00:06:27.390 [2024-11-19 00:50:33.983967] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:27.390 [2024-11-19 00:50:33.984004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.649 [2024-11-19 00:50:34.093996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.649 [2024-11-19 00:50:34.094049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.649 [2024-11-19 00:50:34.094072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=171098 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 171098 /var/tmp/spdk2.sock 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 171098 ']' 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.586 00:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.586 [2024-11-19 00:50:35.027300] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:28.586 [2024-11-19 00:50:35.027388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171098 ] 00:06:28.586 [2024-11-19 00:50:35.185311] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.586 [2024-11-19 00:50:35.185376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.845 [2024-11-19 00:50:35.415095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.845 [2024-11-19 00:50:35.415155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.845 [2024-11-19 00:50:35.415178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.382 [2024-11-19 00:50:37.572417] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 170869 has claimed it. 
00:06:31.382 request: 00:06:31.382 { 00:06:31.382 "method": "framework_enable_cpumask_locks", 00:06:31.382 "req_id": 1 00:06:31.382 } 00:06:31.382 Got JSON-RPC error response 00:06:31.382 response: 00:06:31.382 { 00:06:31.382 "code": -32603, 00:06:31.382 "message": "Failed to claim CPU core: 2" 00:06:31.382 } 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 170869 /var/tmp/spdk.sock 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 170869 ']' 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 171098 /var/tmp/spdk2.sock 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 171098 ']' 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
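In this via_rpc variant both targets were started with --disable-cpumask-locks, so the overlapping masks do not collide at startup; the locks are only claimed when the first target runs framework_enable_cpumask_locks, after which the same request against the second target fails with the -32603 "Failed to claim CPU core: 2" response shown above (0x7 and 0x1c overlap on exactly core 2, since 0x7 & 0x1c == 0x4). A sketch of that exchange, assuming both targets are already up on the sockets used here:

  printf 'shared core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4 -> only core 2 is in both masks
  ./scripts/rpc.py framework_enable_cpumask_locks       # first target (default /var/tmp/spdk.sock) takes its locks
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already locked (-32603)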
00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.382 00:06:31.382 real 0m4.199s 00:06:31.382 user 0m1.162s 00:06:31.382 sys 0m0.189s 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.382 00:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.382 ************************************ 00:06:31.382 END TEST locking_overlapped_coremask_via_rpc 00:06:31.382 ************************************ 00:06:31.382 00:50:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:31.382 00:50:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 170869 ]] 00:06:31.382 00:50:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 170869 00:06:31.382 00:50:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 170869 ']' 00:06:31.382 00:50:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 170869 00:06:31.382 00:50:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:31.382 00:50:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.382 00:50:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170869 00:06:31.382 00:50:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.382 00:50:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.382 00:50:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 170869' 00:06:31.382 killing process with pid 170869 00:06:31.382 00:50:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 170869 00:06:31.382 00:50:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 170869 00:06:33.917 00:50:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 171098 ]] 00:06:33.917 00:50:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 171098 00:06:33.917 00:50:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 171098 ']' 00:06:33.917 00:50:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 171098 00:06:33.917 00:50:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:33.917 00:50:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:33.917 00:50:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171098 00:06:33.917 00:50:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:33.917 00:50:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:33.917 00:50:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171098' 00:06:33.917 killing process with pid 171098 00:06:33.917 00:50:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 171098 00:06:33.917 00:50:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 171098 00:06:36.456 00:50:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:36.456 00:50:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:36.456 00:50:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 170869 ]] 00:06:36.456 00:50:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 170869 00:06:36.456 00:50:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 170869 ']' 00:06:36.456 00:50:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 170869 00:06:36.456 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (170869) - No such process 00:06:36.456 00:50:42 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 170869 is not found' 00:06:36.456 Process with pid 170869 is not found 00:06:36.456 00:50:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 171098 ]] 00:06:36.456 00:50:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 171098 00:06:36.456 00:50:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 171098 ']' 00:06:36.456 00:50:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 171098 00:06:36.456 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (171098) - No such process 00:06:36.456 00:50:42 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 171098 is not found' 00:06:36.456 Process with pid 171098 is not found 00:06:36.456 00:50:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:36.456 00:06:36.456 real 0m48.724s 00:06:36.456 user 1m24.315s 00:06:36.456 sys 0m6.596s 00:06:36.456 00:50:42 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.456 00:50:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 ************************************ 00:06:36.456 END TEST cpu_locks 00:06:36.456 ************************************ 00:06:36.456 00:06:36.456 real 1m18.227s 00:06:36.456 user 2m21.493s 00:06:36.456 sys 0m10.470s 00:06:36.456 00:50:43 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.456 00:50:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 ************************************ 00:06:36.456 END TEST event 00:06:36.456 ************************************ 00:06:36.456 00:50:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/thread.sh 00:06:36.456 00:50:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.456 00:50:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.456 00:50:43 -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 ************************************ 00:06:36.456 START TEST thread 00:06:36.456 ************************************ 00:06:36.456 00:50:43 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/thread.sh 00:06:36.717 * Looking for test storage... 00:06:36.717 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.717 00:50:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.717 00:50:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.717 00:50:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.717 00:50:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.717 00:50:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.717 00:50:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.717 00:50:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.717 00:50:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.717 00:50:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.717 00:50:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.717 00:50:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.717 00:50:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:36.717 00:50:43 thread -- scripts/common.sh@345 -- # : 1 00:06:36.717 00:50:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.717 00:50:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.717 00:50:43 thread -- scripts/common.sh@365 -- # decimal 1 00:06:36.717 00:50:43 thread -- scripts/common.sh@353 -- # local d=1 00:06:36.717 00:50:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.717 00:50:43 thread -- scripts/common.sh@355 -- # echo 1 00:06:36.717 00:50:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.717 00:50:43 thread -- scripts/common.sh@366 -- # decimal 2 00:06:36.717 00:50:43 thread -- scripts/common.sh@353 -- # local d=2 00:06:36.717 00:50:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.717 00:50:43 thread -- scripts/common.sh@355 -- # echo 2 00:06:36.717 00:50:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.717 00:50:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.717 00:50:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.717 00:50:43 thread -- scripts/common.sh@368 -- # return 0 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.717 --rc genhtml_branch_coverage=1 00:06:36.717 --rc genhtml_function_coverage=1 00:06:36.717 --rc genhtml_legend=1 00:06:36.717 --rc geninfo_all_blocks=1 00:06:36.717 --rc geninfo_unexecuted_blocks=1 00:06:36.717 00:06:36.717 ' 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.717 --rc genhtml_branch_coverage=1 00:06:36.717 --rc genhtml_function_coverage=1 00:06:36.717 --rc genhtml_legend=1 00:06:36.717 --rc geninfo_all_blocks=1 00:06:36.717 --rc geninfo_unexecuted_blocks=1 00:06:36.717 00:06:36.717 ' 00:06:36.717 00:50:43 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.717 --rc genhtml_branch_coverage=1 00:06:36.717 --rc genhtml_function_coverage=1 00:06:36.717 --rc genhtml_legend=1 00:06:36.717 --rc geninfo_all_blocks=1 00:06:36.717 --rc geninfo_unexecuted_blocks=1 00:06:36.717 00:06:36.717 ' 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.717 --rc genhtml_branch_coverage=1 00:06:36.717 --rc genhtml_function_coverage=1 00:06:36.717 --rc genhtml_legend=1 00:06:36.717 --rc geninfo_all_blocks=1 00:06:36.717 --rc geninfo_unexecuted_blocks=1 00:06:36.717 00:06:36.717 ' 00:06:36.717 00:50:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.717 00:50:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.717 ************************************ 00:06:36.717 START TEST thread_poller_perf 00:06:36.717 ************************************ 00:06:36.717 00:50:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.717 [2024-11-19 00:50:43.324744] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:36.718 [2024-11-19 00:50:43.324822] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172569 ] 00:06:36.978 [2024-11-19 00:50:43.445552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.978 [2024-11-19 00:50:43.546930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.978 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:38.360 [2024-11-18T23:50:45.053Z] ====================================== 00:06:38.360 [2024-11-18T23:50:45.053Z] busy:2109887754 (cyc) 00:06:38.360 [2024-11-18T23:50:45.053Z] total_run_count: 402000 00:06:38.360 [2024-11-18T23:50:45.053Z] tsc_hz: 2100000000 (cyc) 00:06:38.360 [2024-11-18T23:50:45.053Z] ====================================== 00:06:38.360 [2024-11-18T23:50:45.053Z] poller_cost: 5248 (cyc), 2499 (nsec) 00:06:38.360 00:06:38.360 real 0m1.482s 00:06:38.360 user 0m1.358s 00:06:38.360 sys 0m0.117s 00:06:38.360 00:50:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.360 00:50:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.360 ************************************ 00:06:38.360 END TEST thread_poller_perf 00:06:38.360 ************************************ 00:06:38.360 00:50:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.360 00:50:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:38.360 00:50:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.360 00:50:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.360 ************************************ 00:06:38.360 START TEST thread_poller_perf 00:06:38.360 ************************************ 00:06:38.360 00:50:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.360 [2024-11-19 00:50:44.880406] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:38.360 [2024-11-19 00:50:44.880497] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172823 ] 00:06:38.361 [2024-11-19 00:50:45.000522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.622 [2024-11-19 00:50:45.100978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.622 Running 1000 pollers for 1 seconds with 0 microseconds period. 
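The poller_cost line in these tables follows directly from the other rows: busy cycles divided by total_run_count, converted to nanoseconds with the reported 2100000000 Hz TSC. Recomputing it from the 1-microsecond-period run above (values copied from the table):

  busy_cyc=2109887754                                  # busy (cyc)
  runs=402000                                          # total_run_count
  tsc_hz=2100000000                                    # tsc_hz (cyc)
  echo "poller_cost: $(( busy_cyc / runs )) (cyc), $(( busy_cyc / runs * 1000000000 / tsc_hz )) (nsec)"
  # -> poller_cost: 5248 (cyc), 2499 (nsec), matching the table above

The 0-microsecond-period run reported below works out the same way: 2102390778 / 5254000 is about 400 cycles, roughly 190 ns per poller call.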
00:06:40.004 [2024-11-18T23:50:46.697Z] ====================================== 00:06:40.004 [2024-11-18T23:50:46.697Z] busy:2102390778 (cyc) 00:06:40.004 [2024-11-18T23:50:46.697Z] total_run_count: 5254000 00:06:40.004 [2024-11-18T23:50:46.697Z] tsc_hz: 2100000000 (cyc) 00:06:40.004 [2024-11-18T23:50:46.697Z] ====================================== 00:06:40.004 [2024-11-18T23:50:46.697Z] poller_cost: 400 (cyc), 190 (nsec) 00:06:40.004 00:06:40.004 real 0m1.477s 00:06:40.004 user 0m1.348s 00:06:40.004 sys 0m0.124s 00:06:40.004 00:50:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.004 00:50:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.004 ************************************ 00:06:40.004 END TEST thread_poller_perf 00:06:40.004 ************************************ 00:06:40.004 00:50:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:40.004 00:06:40.004 real 0m3.273s 00:06:40.004 user 0m2.860s 00:06:40.004 sys 0m0.425s 00:06:40.004 00:50:46 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.004 00:50:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.004 ************************************ 00:06:40.004 END TEST thread 00:06:40.004 ************************************ 00:06:40.004 00:50:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:40.004 00:50:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.004 00:50:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.004 00:50:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.004 00:50:46 -- common/autotest_common.sh@10 -- # set +x 00:06:40.004 ************************************ 00:06:40.004 START TEST app_cmdline 00:06:40.004 ************************************ 00:06:40.004 00:50:46 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.004 * Looking for test storage... 
00:06:40.004 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:06:40.004 00:50:46 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.004 00:50:46 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.004 00:50:46 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.004 00:50:46 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.004 00:50:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.005 00:50:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.005 --rc genhtml_branch_coverage=1 00:06:40.005 --rc genhtml_function_coverage=1 00:06:40.005 --rc genhtml_legend=1 00:06:40.005 --rc geninfo_all_blocks=1 00:06:40.005 --rc geninfo_unexecuted_blocks=1 00:06:40.005 00:06:40.005 ' 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.005 --rc genhtml_branch_coverage=1 00:06:40.005 --rc genhtml_function_coverage=1 00:06:40.005 --rc genhtml_legend=1 00:06:40.005 --rc geninfo_all_blocks=1 00:06:40.005 --rc geninfo_unexecuted_blocks=1 
00:06:40.005 00:06:40.005 ' 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.005 --rc genhtml_branch_coverage=1 00:06:40.005 --rc genhtml_function_coverage=1 00:06:40.005 --rc genhtml_legend=1 00:06:40.005 --rc geninfo_all_blocks=1 00:06:40.005 --rc geninfo_unexecuted_blocks=1 00:06:40.005 00:06:40.005 ' 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.005 --rc genhtml_branch_coverage=1 00:06:40.005 --rc genhtml_function_coverage=1 00:06:40.005 --rc genhtml_legend=1 00:06:40.005 --rc geninfo_all_blocks=1 00:06:40.005 --rc geninfo_unexecuted_blocks=1 00:06:40.005 00:06:40.005 ' 00:06:40.005 00:50:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:40.005 00:50:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=173116 00:06:40.005 00:50:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 173116 00:06:40.005 00:50:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 173116 ']' 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.005 00:50:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.005 [2024-11-19 00:50:46.677951] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:40.005 [2024-11-19 00:50:46.678037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173116 ] 00:06:40.264 [2024-11-19 00:50:46.801736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.264 [2024-11-19 00:50:46.913587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.203 00:50:47 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.203 00:50:47 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:41.203 00:50:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:41.462 { 00:06:41.462 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:06:41.462 "fields": { 00:06:41.462 "major": 25, 00:06:41.462 "minor": 1, 00:06:41.462 "patch": 0, 00:06:41.462 "suffix": "-pre", 00:06:41.462 "commit": "d47eb51c9" 00:06:41.462 } 00:06:41.462 } 00:06:41.462 00:50:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:41.462 00:50:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:41.462 00:50:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:41.462 00:50:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:41.462 00:50:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:41.462 00:50:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.462 00:50:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.462 00:50:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:41.462 00:50:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:41.462 00:50:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:06:41.462 00:50:47 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.462 request: 00:06:41.462 { 00:06:41.462 "method": "env_dpdk_get_mem_stats", 00:06:41.462 "req_id": 1 00:06:41.462 } 00:06:41.462 Got JSON-RPC error response 00:06:41.462 response: 00:06:41.462 { 00:06:41.462 "code": -32601, 00:06:41.462 "message": "Method not found" 00:06:41.462 } 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.722 00:50:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 173116 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 173116 ']' 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 173116 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173116 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173116' 00:06:41.722 killing process with pid 173116 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@973 -- # kill 173116 00:06:41.722 00:50:48 app_cmdline -- common/autotest_common.sh@978 -- # wait 173116 00:06:44.274 00:06:44.274 real 0m4.106s 00:06:44.274 user 0m4.348s 00:06:44.274 sys 0m0.559s 00:06:44.274 00:50:50 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.274 00:50:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.274 ************************************ 00:06:44.274 END TEST app_cmdline 00:06:44.274 ************************************ 00:06:44.274 00:50:50 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/version.sh 00:06:44.274 00:50:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.274 00:50:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.274 00:50:50 -- common/autotest_common.sh@10 -- # set +x 00:06:44.274 ************************************ 00:06:44.274 START TEST version 00:06:44.274 ************************************ 00:06:44.274 00:50:50 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/version.sh 00:06:44.274 * Looking for test storage... 
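The -32601 "Method not found" above is the point of the app_cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable and env_dpdk_get_mem_stats (an RPC that exists on an unrestricted target) is rejected. Against a target started the same way, the allowed/blocked pair can be reproduced by hand from the SPDK checkout:

  ./scripts/rpc.py spdk_get_version                      # allowed: prints the version JSON shown above
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # allowed: lists exactly the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats                # blocked by --rpcs-allowed: -32601 "Method not found"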
00:06:44.274 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:06:44.274 00:50:50 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.274 00:50:50 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.274 00:50:50 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.274 00:50:50 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.274 00:50:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.274 00:50:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.274 00:50:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.274 00:50:50 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.274 00:50:50 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.274 00:50:50 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.274 00:50:50 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.274 00:50:50 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.274 00:50:50 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.274 00:50:50 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.274 00:50:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.274 00:50:50 version -- scripts/common.sh@344 -- # case "$op" in 00:06:44.274 00:50:50 version -- scripts/common.sh@345 -- # : 1 00:06:44.274 00:50:50 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.274 00:50:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.274 00:50:50 version -- scripts/common.sh@365 -- # decimal 1 00:06:44.274 00:50:50 version -- scripts/common.sh@353 -- # local d=1 00:06:44.274 00:50:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.274 00:50:50 version -- scripts/common.sh@355 -- # echo 1 00:06:44.274 00:50:50 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.274 00:50:50 version -- scripts/common.sh@366 -- # decimal 2 00:06:44.274 00:50:50 version -- scripts/common.sh@353 -- # local d=2 00:06:44.274 00:50:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.274 00:50:50 version -- scripts/common.sh@355 -- # echo 2 00:06:44.274 00:50:50 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.274 00:50:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.274 00:50:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.274 00:50:50 version -- scripts/common.sh@368 -- # return 0 00:06:44.274 00:50:50 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.274 00:50:50 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.274 --rc genhtml_branch_coverage=1 00:06:44.274 --rc genhtml_function_coverage=1 00:06:44.274 --rc genhtml_legend=1 00:06:44.275 --rc geninfo_all_blocks=1 00:06:44.275 --rc geninfo_unexecuted_blocks=1 00:06:44.275 00:06:44.275 ' 00:06:44.275 00:50:50 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.275 --rc genhtml_branch_coverage=1 00:06:44.275 --rc genhtml_function_coverage=1 00:06:44.275 --rc genhtml_legend=1 00:06:44.275 --rc geninfo_all_blocks=1 00:06:44.275 --rc geninfo_unexecuted_blocks=1 00:06:44.275 00:06:44.275 ' 00:06:44.275 00:50:50 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.275 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.275 --rc genhtml_branch_coverage=1 00:06:44.275 --rc genhtml_function_coverage=1 00:06:44.275 --rc genhtml_legend=1 00:06:44.275 --rc geninfo_all_blocks=1 00:06:44.275 --rc geninfo_unexecuted_blocks=1 00:06:44.275 00:06:44.275 ' 00:06:44.275 00:50:50 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.275 --rc genhtml_branch_coverage=1 00:06:44.275 --rc genhtml_function_coverage=1 00:06:44.275 --rc genhtml_legend=1 00:06:44.275 --rc geninfo_all_blocks=1 00:06:44.275 --rc geninfo_unexecuted_blocks=1 00:06:44.275 00:06:44.275 ' 00:06:44.275 00:50:50 version -- app/version.sh@17 -- # get_header_version major 00:06:44.275 00:50:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:44.275 00:50:50 version -- app/version.sh@14 -- # cut -f2 00:06:44.275 00:50:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.275 00:50:50 version -- app/version.sh@17 -- # major=25 00:06:44.275 00:50:50 version -- app/version.sh@18 -- # get_header_version minor 00:06:44.275 00:50:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:44.275 00:50:50 version -- app/version.sh@14 -- # cut -f2 00:06:44.275 00:50:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.275 00:50:50 version -- app/version.sh@18 -- # minor=1 00:06:44.275 00:50:50 version -- app/version.sh@19 -- # get_header_version patch 00:06:44.275 00:50:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:44.275 00:50:50 version -- app/version.sh@14 -- # cut -f2 00:06:44.275 00:50:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.275 00:50:50 version -- app/version.sh@19 -- # patch=0 00:06:44.275 00:50:50 version -- app/version.sh@20 -- # get_header_version suffix 00:06:44.275 00:50:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:44.275 00:50:50 version -- app/version.sh@14 -- # cut -f2 00:06:44.275 00:50:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.275 00:50:50 version -- app/version.sh@20 -- # suffix=-pre 00:06:44.275 00:50:50 version -- app/version.sh@22 -- # version=25.1 00:06:44.275 00:50:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:44.275 00:50:50 version -- app/version.sh@28 -- # version=25.1rc0 00:06:44.275 00:50:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:06:44.275 00:50:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:44.275 00:50:50 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:44.275 00:50:50 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:44.275 00:06:44.275 real 0m0.244s 00:06:44.275 user 0m0.152s 00:06:44.275 sys 0m0.136s 00:06:44.275 00:50:50 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.275 
00:50:50 version -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 ************************************ 00:06:44.275 END TEST version 00:06:44.275 ************************************ 00:06:44.275 00:50:50 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:44.275 00:50:50 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:44.275 00:50:50 -- spdk/autotest.sh@194 -- # uname -s 00:06:44.275 00:50:50 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:44.275 00:50:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:44.275 00:50:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:44.275 00:50:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:44.275 00:50:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:44.275 00:50:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:44.275 00:50:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.275 00:50:50 -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 00:50:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:44.275 00:50:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:44.275 00:50:50 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:44.275 00:50:50 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:44.275 00:50:50 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:06:44.275 00:50:50 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:44.275 00:50:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.275 00:50:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.275 00:50:50 -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 ************************************ 00:06:44.275 START TEST nvmf_rdma 00:06:44.275 ************************************ 00:06:44.275 00:50:50 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:44.535 * Looking for test storage... 00:06:44.535 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.535 00:50:51 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.535 --rc genhtml_branch_coverage=1 00:06:44.535 --rc genhtml_function_coverage=1 00:06:44.535 --rc genhtml_legend=1 00:06:44.535 --rc geninfo_all_blocks=1 00:06:44.535 --rc geninfo_unexecuted_blocks=1 00:06:44.535 00:06:44.535 ' 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.535 --rc genhtml_branch_coverage=1 00:06:44.535 --rc genhtml_function_coverage=1 00:06:44.535 --rc genhtml_legend=1 00:06:44.535 --rc geninfo_all_blocks=1 00:06:44.535 --rc geninfo_unexecuted_blocks=1 00:06:44.535 00:06:44.535 ' 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.535 --rc genhtml_branch_coverage=1 00:06:44.535 --rc genhtml_function_coverage=1 00:06:44.535 --rc genhtml_legend=1 00:06:44.535 --rc geninfo_all_blocks=1 00:06:44.535 --rc geninfo_unexecuted_blocks=1 00:06:44.535 00:06:44.535 ' 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.535 --rc genhtml_branch_coverage=1 00:06:44.535 --rc genhtml_function_coverage=1 00:06:44.535 --rc genhtml_legend=1 00:06:44.535 --rc geninfo_all_blocks=1 00:06:44.535 --rc geninfo_unexecuted_blocks=1 00:06:44.535 00:06:44.535 ' 00:06:44.535 00:50:51 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:06:44.535 00:50:51 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:44.535 00:50:51 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.535 00:50:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:44.535 ************************************ 00:06:44.535 START TEST nvmf_target_core 00:06:44.535 ************************************ 00:06:44.535 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:44.795 * Looking for test storage... 00:06:44.795 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:44.795 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.796 --rc genhtml_branch_coverage=1 00:06:44.796 --rc genhtml_function_coverage=1 00:06:44.796 --rc genhtml_legend=1 00:06:44.796 --rc geninfo_all_blocks=1 00:06:44.796 --rc geninfo_unexecuted_blocks=1 00:06:44.796 00:06:44.796 ' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.796 --rc genhtml_branch_coverage=1 00:06:44.796 --rc genhtml_function_coverage=1 00:06:44.796 --rc genhtml_legend=1 00:06:44.796 --rc geninfo_all_blocks=1 00:06:44.796 --rc geninfo_unexecuted_blocks=1 00:06:44.796 00:06:44.796 ' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.796 --rc genhtml_branch_coverage=1 00:06:44.796 --rc genhtml_function_coverage=1 00:06:44.796 --rc genhtml_legend=1 00:06:44.796 --rc geninfo_all_blocks=1 00:06:44.796 --rc geninfo_unexecuted_blocks=1 00:06:44.796 00:06:44.796 ' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.796 --rc genhtml_branch_coverage=1 00:06:44.796 --rc genhtml_function_coverage=1 00:06:44.796 --rc genhtml_legend=1 00:06:44.796 --rc geninfo_all_blocks=1 00:06:44.796 --rc geninfo_unexecuted_blocks=1 00:06:44.796 00:06:44.796 ' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.796 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.796 
************************************ 00:06:44.796 START TEST nvmf_abort 00:06:44.796 ************************************ 00:06:44.796 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:45.058 * Looking for test storage... 00:06:45.058 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.058 --rc genhtml_branch_coverage=1 00:06:45.058 --rc genhtml_function_coverage=1 00:06:45.058 --rc genhtml_legend=1 00:06:45.058 --rc geninfo_all_blocks=1 00:06:45.058 --rc geninfo_unexecuted_blocks=1 00:06:45.058 00:06:45.058 ' 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.058 --rc genhtml_branch_coverage=1 00:06:45.058 --rc genhtml_function_coverage=1 00:06:45.058 --rc genhtml_legend=1 00:06:45.058 --rc geninfo_all_blocks=1 00:06:45.058 --rc geninfo_unexecuted_blocks=1 00:06:45.058 00:06:45.058 ' 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.058 --rc genhtml_branch_coverage=1 00:06:45.058 --rc genhtml_function_coverage=1 00:06:45.058 --rc genhtml_legend=1 00:06:45.058 --rc geninfo_all_blocks=1 00:06:45.058 --rc geninfo_unexecuted_blocks=1 00:06:45.058 00:06:45.058 ' 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.058 --rc genhtml_branch_coverage=1 00:06:45.058 --rc genhtml_function_coverage=1 00:06:45.058 --rc genhtml_legend=1 00:06:45.058 --rc geninfo_all_blocks=1 00:06:45.058 --rc geninfo_unexecuted_blocks=1 00:06:45.058 00:06:45.058 ' 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.058 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.059 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 
-- # nvmftestinit 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.059 00:50:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:51.637 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:51.637 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.637 
00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@405 -- # modinfo irdma 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:51.637 Found net devices under 0000:af:00.0: cvl_0_0 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:51.637 Found net devices under 0000:af:00.1: cvl_0_1 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:51.637 00:50:57 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:51.637 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo cvl_0_0 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo cvl_0_1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:06:51.638 
00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:06:51.638 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:06:51.638 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:06:51.638 altname enp175s0f0np0 00:06:51.638 altname ens801f0np0 00:06:51.638 inet 192.168.100.8/24 scope global cvl_0_0 00:06:51.638 valid_lft forever preferred_lft forever 00:06:51.638 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:06:51.638 valid_lft forever preferred_lft forever 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:06:51.638 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:06:51.638 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:06:51.638 altname enp175s0f1np1 00:06:51.638 altname ens801f1np1 00:06:51.638 inet 192.168.100.9/24 scope global cvl_0_1 00:06:51.638 valid_lft forever preferred_lft forever 00:06:51.638 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:06:51.638 valid_lft forever preferred_lft forever 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo cvl_0_0 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo cvl_0_1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:51.638 192.168.100.9' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:51.638 192.168.100.9' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:51.638 192.168.100.9' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@486 -- # tail -n +2 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=177233 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 177233 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 177233 ']' 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.638 00:50:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.638 [2024-11-19 00:50:57.617290] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:51.639 [2024-11-19 00:50:57.617398] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.639 [2024-11-19 00:50:57.744523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.639 [2024-11-19 00:50:57.851633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.639 [2024-11-19 00:50:57.851679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:51.639 [2024-11-19 00:50:57.851689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.639 [2024-11-19 00:50:57.851699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.639 [2024-11-19 00:50:57.851706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.639 [2024-11-19 00:50:57.853939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.639 [2024-11-19 00:50:57.853998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.639 [2024-11-19 00:50:57.854019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.897 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.897 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:51.897 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:51.897 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.897 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.897 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.897 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:06:51.897 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.897 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.897 [2024-11-19 00:50:58.480841] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000028fc0/0x617000007c40) succeed. 00:06:51.898 [2024-11-19 00:50:58.490208] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029140/0x617000007fc0) succeed. 00:06:51.898 [2024-11-19 00:50:58.490234] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:06:51.898 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.898 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:51.898 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.898 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.898 Malloc0 00:06:51.898 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.898 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:51.898 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.898 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.157 Delay0 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.157 [2024-11-19 00:50:58.621161] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.157 00:50:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:52.157 [2024-11-19 00:50:58.766760] 
nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:54.689 Initializing NVMe Controllers 00:06:54.689 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:06:54.689 controller IO queue size 128 less than required 00:06:54.689 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:54.689 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:54.689 Initialization complete. Launching workers. 00:06:54.689 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38312 00:06:54.689 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38373, failed to submit 62 00:06:54.689 success 38316, unsuccessful 57, failed 0 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:54.689 00:51:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:54.689 rmmod nvme_rdma 00:06:54.689 rmmod nvme_fabrics 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 177233 ']' 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 177233 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 177233 ']' 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 177233 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 177233 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 177233' 00:06:54.689 killing process with pid 177233 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 177233 00:06:54.689 00:51:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 177233 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:56.068 00:06:56.068 real 0m11.029s 00:06:56.068 user 0m17.376s 00:06:56.068 sys 0m4.985s 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.068 ************************************ 00:06:56.068 END TEST nvmf_abort 00:06:56.068 ************************************ 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:56.068 ************************************ 00:06:56.068 START TEST nvmf_ns_hotplug_stress 00:06:56.068 ************************************ 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:56.068 * Looking for test storage... 
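Editor's aside (a hedged sketch, not part of the captured log): the nvmf_abort run that finished just above boils down to a short RPC sequence against a freshly started nvmf_tgt. The lines below condense the commands actually traced earlier in this log into a standalone bash outline; SPDK_DIR, the cnode0 NQN, the 192.168.100.8 listener address, and the bdev sizes are simply the values observed in this run, not requirements, and the sleep stands in for the harness's RPC-socket polling.

  SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
  rpc="$SPDK_DIR/scripts/rpc.py"
  "$SPDK_DIR/build/bin/nvmf_tgt" -m 0xE &                      # target on cores 1-3, as in this log
  sleep 2                                                      # crude wait; the real harness waits on /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0                   # 64 MiB malloc bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  "$SPDK_DIR/build/examples/abort" -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The ns_hotplug_stress trace that follows starts from the same target setup before repeatedly resizing and removing the namespace.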
00:06:56.068 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.068 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.069 --rc genhtml_branch_coverage=1 00:06:56.069 --rc genhtml_function_coverage=1 00:06:56.069 --rc genhtml_legend=1 00:06:56.069 --rc geninfo_all_blocks=1 00:06:56.069 --rc geninfo_unexecuted_blocks=1 00:06:56.069 00:06:56.069 ' 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.069 --rc genhtml_branch_coverage=1 00:06:56.069 --rc genhtml_function_coverage=1 00:06:56.069 --rc genhtml_legend=1 00:06:56.069 --rc geninfo_all_blocks=1 00:06:56.069 --rc geninfo_unexecuted_blocks=1 00:06:56.069 00:06:56.069 ' 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.069 --rc genhtml_branch_coverage=1 00:06:56.069 --rc genhtml_function_coverage=1 00:06:56.069 --rc genhtml_legend=1 00:06:56.069 --rc geninfo_all_blocks=1 00:06:56.069 --rc geninfo_unexecuted_blocks=1 00:06:56.069 00:06:56.069 ' 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.069 --rc genhtml_branch_coverage=1 00:06:56.069 --rc genhtml_function_coverage=1 00:06:56.069 --rc genhtml_legend=1 00:06:56.069 --rc geninfo_all_blocks=1 00:06:56.069 --rc geninfo_unexecuted_blocks=1 00:06:56.069 00:06:56.069 ' 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # 
source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.069 00:51:02 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.069 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:56.069 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:56.070 00:51:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:02.647 00:51:08 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.647 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:02.648 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:02.648 00:51:08 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:02.648 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@405 -- # modinfo irdma 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:02.648 Found net devices under 0000:af:00.0: cvl_0_0 00:07:02.648 00:51:08 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:02.648 Found net devices under 0000:af:00.1: cvl_0_1 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t 
rxe_net_devs 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo cvl_0_0 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo cvl_0_1 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:07:02.648 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:02.648 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:02.648 altname enp175s0f0np0 00:07:02.648 altname ens801f0np0 00:07:02.648 inet 192.168.100.8/24 scope global cvl_0_0 00:07:02.648 valid_lft forever preferred_lft forever 00:07:02.648 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:02.648 valid_lft forever preferred_lft forever 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 
-- # for nic_name in $(get_rdma_if_list) 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:02.648 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:07:02.649 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:02.649 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:02.649 altname enp175s0f1np1 00:07:02.649 altname ens801f1np1 00:07:02.649 inet 192.168.100.9/24 scope global cvl_0_1 00:07:02.649 valid_lft forever preferred_lft forever 00:07:02.649 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:02.649 valid_lft forever preferred_lft forever 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo cvl_0_0 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo cvl_0_1 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:02.649 192.168.100.9' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:02.649 192.168.100.9' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:02.649 192.168.100.9' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 
00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=181473 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 181473 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 181473 ']' 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.649 00:51:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:02.649 [2024-11-19 00:51:08.719100] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:02.649 [2024-11-19 00:51:08.719193] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.649 [2024-11-19 00:51:08.847069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.649 [2024-11-19 00:51:08.956402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.649 [2024-11-19 00:51:08.956446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.649 [2024-11-19 00:51:08.956457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.649 [2024-11-19 00:51:08.956468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.649 [2024-11-19 00:51:08.956475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:02.649 [2024-11-19 00:51:08.958836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.649 [2024-11-19 00:51:08.958888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.649 [2024-11-19 00:51:08.958910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.908 00:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.908 00:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:02.908 00:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:02.908 00:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:02.908 00:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:02.908 00:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.908 00:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:02.908 00:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:03.167 [2024-11-19 00:51:09.741233] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000028fc0/0x617000007c40) succeed. 00:07:03.167 [2024-11-19 00:51:09.750598] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029140/0x617000007fc0) succeed. 00:07:03.167 [2024-11-19 00:51:09.750627] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:07:03.167 00:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:03.425 00:51:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:03.683 [2024-11-19 00:51:10.152396] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:03.683 00:51:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:03.942 00:51:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:03.942 Malloc0 00:07:03.942 00:51:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:04.200 Delay0 00:07:04.200 00:51:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.458 00:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:04.716 NULL1 00:07:04.717 00:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:04.975 00:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:04.975 00:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=182208 00:07:04.975 00:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:04.975 00:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.975 00:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.234 00:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:05.234 00:51:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:05.492 true 00:07:05.492 00:51:12 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:05.492 00:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.750 00:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.750 00:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:05.750 00:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:06.009 true 00:07:06.009 00:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:06.009 00:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.268 00:51:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.526 00:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:06.526 00:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:06.526 true 00:07:06.526 00:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:06.526 00:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.785 00:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.044 00:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:07.044 00:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:07.302 true 00:07:07.302 00:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:07.302 00:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.302 00:51:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.561 00:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:07.561 00:51:14 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:07.819 true 00:07:07.819 00:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:07.819 00:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.078 00:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.336 00:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:08.336 00:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:08.336 true 00:07:08.336 00:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:08.336 00:51:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.594 00:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.853 00:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:08.853 00:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:09.111 true 00:07:09.111 00:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:09.111 00:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.111 00:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.370 00:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:09.370 00:51:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:09.628 true 00:07:09.628 00:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:09.628 00:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.886 00:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.886 00:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:09.886 00:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:10.144 true 00:07:10.144 00:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:10.144 00:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.403 00:51:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.662 00:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:10.662 00:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:10.662 true 00:07:10.662 00:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:10.662 00:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.919 00:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.178 00:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:11.178 00:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:11.436 true 00:07:11.436 00:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:11.436 00:51:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.436 00:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.694 00:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:11.694 00:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:11.953 true 00:07:11.953 00:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:11.953 00:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:11.953 00:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.211 00:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:12.211 00:51:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:12.470 true 00:07:12.470 00:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:12.470 00:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.728 00:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.986 00:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:12.986 00:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:12.986 true 00:07:12.986 00:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:12.986 00:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.244 00:51:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.502 00:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:13.502 00:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:13.502 true 00:07:13.761 00:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:13.761 00:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.761 00:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.019 00:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:14.019 00:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:14.278 true 00:07:14.278 00:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 
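The entries above show the target bring-up (RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 192.168.100.8:4420, a Delay0 bdev layered on Malloc0, and a 1000-block NULL1 null bdev) followed by the first iterations of the hotplug stress cycle driven against the running spdk_nvme_perf workload (PID 182208 in this run). A condensed sketch of that cycle, reconstructed only from the @44-@50 xtrace lines in this log (the actual ns_hotplug_stress.sh source may word it differently):

  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  null_size=1000

  # PERF_PID is the backgrounded spdk_nvme_perf (-t 30 -q 128 -w randread -o 512); 182208 here.
  while kill -0 "$PERF_PID"; do
      $rpc nvmf_subsystem_remove_ns $nqn 1        # hot-remove namespace 1 while I/O is in flight
      $rpc nvmf_subsystem_add_ns   $nqn Delay0    # re-attach it, backed by the delay bdev
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 $null_size      # grow the NULL1 bdev (namespace 2) by one block
  done

The loop exits once the 30-second perf run finishes and kill -0 starts reporting "No such process", which is exactly what appears further down in this log.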
00:07:14.278 00:51:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.536 00:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.536 00:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:14.536 00:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:14.795 true 00:07:14.795 00:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:14.795 00:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.053 00:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.311 00:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:15.311 00:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:15.311 true 00:07:15.312 00:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:15.312 00:51:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.570 00:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.828 00:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:15.828 00:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:16.086 true 00:07:16.086 00:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:16.086 00:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.086 00:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.343 00:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:16.343 00:51:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:16.602 true 00:07:16.602 00:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:16.602 00:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.859 00:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.859 00:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:16.859 00:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:17.117 true 00:07:17.117 00:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:17.117 00:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.375 00:51:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.634 00:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:17.634 00:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:17.634 true 00:07:17.634 00:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:17.634 00:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.892 00:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.151 00:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:18.151 00:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:18.409 true 00:07:18.409 00:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:18.409 00:51:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.667 00:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.667 00:51:25 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:18.667 00:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:18.925 true 00:07:18.925 00:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:18.925 00:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.183 00:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.442 00:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:19.442 00:51:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:19.442 true 00:07:19.442 00:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:19.442 00:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.701 00:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.959 00:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:19.959 00:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:19.959 true 00:07:20.218 00:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:20.218 00:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.218 00:51:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.476 00:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:20.476 00:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:20.735 true 00:07:20.735 00:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:20.735 00:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.992 00:51:27 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.992 00:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:20.992 00:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:21.292 true 00:07:21.292 00:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:21.292 00:51:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.550 00:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.808 00:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:21.809 00:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:21.809 true 00:07:21.809 00:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:21.809 00:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.067 00:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.325 00:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:22.325 00:51:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:22.583 true 00:07:22.583 00:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:22.583 00:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.583 00:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.841 00:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:22.841 00:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:23.100 true 00:07:23.100 00:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:23.100 00:51:29 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.359 00:51:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.617 00:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:23.618 00:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:23.618 true 00:07:23.618 00:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:23.618 00:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.875 00:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.133 00:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:24.133 00:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:24.133 true 00:07:24.390 00:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:24.390 00:51:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.390 00:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.648 00:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:24.648 00:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:24.906 true 00:07:24.906 00:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:24.906 00:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.164 00:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.164 00:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:25.164 00:51:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1035 00:07:25.422 true 00:07:25.422 00:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:25.422 00:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.680 00:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.938 00:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:25.938 00:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:26.196 true 00:07:26.196 00:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:26.196 00:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.196 00:51:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.454 00:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:26.454 00:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:26.712 true 00:07:26.712 00:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:26.712 00:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.970 00:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.970 00:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:26.970 00:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:27.228 true 00:07:27.228 00:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:27.228 00:51:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.486 00:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.745 00:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1039 00:07:27.745 00:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:27.745 true 00:07:27.745 00:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:27.745 00:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.003 00:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.261 00:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:28.261 00:51:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:28.520 true 00:07:28.520 00:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:28.520 00:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.778 00:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.778 00:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:28.778 00:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:29.036 true 00:07:29.036 00:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:29.036 00:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.294 00:51:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.552 00:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:29.552 00:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:29.552 true 00:07:29.552 00:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:29.552 00:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.810 00:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.068 00:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:30.068 00:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:30.326 true 00:07:30.326 00:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:30.326 00:51:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.584 00:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.584 00:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:30.584 00:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:30.842 true 00:07:30.842 00:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:30.842 00:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.100 00:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.358 00:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:31.358 00:51:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:31.358 true 00:07:31.358 00:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:31.358 00:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.617 00:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.875 00:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:31.875 00:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:31.875 true 00:07:32.133 00:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:32.133 00:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.133 00:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.391 00:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:32.391 00:51:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:32.649 true 00:07:32.649 00:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:32.649 00:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.913 00:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.913 00:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:32.913 00:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:33.171 true 00:07:33.171 00:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:33.171 00:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.429 00:51:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.687 00:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:33.687 00:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:33.687 true 00:07:33.687 00:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:33.687 00:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.946 00:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.209 00:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:34.209 00:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:34.468 true 00:07:34.468 00:51:40 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:34.468 00:51:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.727 00:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.727 00:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:34.727 00:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:34.986 true 00:07:34.986 00:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:34.986 00:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.244 00:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.502 00:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:35.502 00:51:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:35.502 true 00:07:35.502 00:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:35.502 00:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.760 00:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.017 00:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:36.018 00:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:36.276 Initializing NVMe Controllers 00:07:36.276 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:36.276 Controller IO queue size 128, less than required. 00:07:36.276 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:36.276 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:36.276 Initialization complete. Launching workers. 
00:07:36.276 ======================================================== 00:07:36.276 Latency(us) 00:07:36.276 Device Information : IOPS MiB/s Average min max 00:07:36.276 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35306.63 17.24 3625.20 2025.26 5931.84 00:07:36.276 ======================================================== 00:07:36.276 Total : 35306.63 17.24 3625.20 2025.26 5931.84 00:07:36.276 00:07:36.276 true 00:07:36.276 00:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:36.276 00:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.276 00:51:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.533 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:07:36.533 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:07:36.791 true 00:07:36.791 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 182208 00:07:36.791 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (182208) - No such process 00:07:36.791 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 182208 00:07:36.791 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.049 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.307 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:37.307 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:37.307 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:37.307 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.307 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:37.307 null0 00:07:37.307 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.307 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.307 00:51:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:37.565 null1 00:07:37.565 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i 
)) 00:07:37.565 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.565 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:37.823 null2 00:07:37.823 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.823 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.823 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:37.823 null3 00:07:37.823 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.823 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.823 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:38.081 null4 00:07:38.081 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.081 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.081 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:38.338 null5 00:07:38.338 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.338 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.338 00:51:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:38.596 null6 00:07:38.596 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.596 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.597 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:38.855 null7 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
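From this point the test switches to the parallel add/remove phase: eight null bdevs (null0..null7, created with bdev_null_create <name> 100 4096) are set up, and eight background workers each repeatedly attach and detach their own namespace ID against nqn.2016-06.io.spdk:cnode1. A sketch of that phase, assuming the shape implied by the @14-@18 and @58-@66 trace lines in this log (iteration count and helper layout are read off the traces, not the script source):

  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  nthreads=8
  pids=()

  add_remove() {                     # one worker: churn a single namespace ID
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" $nqn "$bdev"
          $rpc nvmf_subsystem_remove_ns $nqn "$nsid"
      done
  }

  for ((i = 0; i < nthreads; i++)); do
      $rpc bdev_null_create "null$i" 100 4096    # 100 MB null bdev, 4096-byte blocks
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &           # nsid 1..8, each bound to its own bdev
      pids+=($!)
  done
  wait "${pids[@]}"                              # the "wait 188031 188033 ..." seen below

The interleaved @14-@17 traces that follow are those eight workers running concurrently, which is the point of the stress: concurrent namespace attach/detach RPCs against one subsystem.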
00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.855 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
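Each worker in the trace runs the add_remove helper (tags @14-@18): given a namespace ID and a backing bdev, it hot-adds the namespace to nqn.2016-06.io.spdk:cnode1 and immediately removes it again, ten times in a row, while the launcher (tags @62-@66) starts one such worker per bdev in the background and then waits for all of them. A rough reconstruction from the @-tags only; function and variable names are inferred from the trace, and $rpc_py stands in for the full scripts/rpc.py path shown above:

    # add_remove <nsid> <bdev>: ten hot-add / hot-remove cycles against cnode1
    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # one background worker per namespace; the explicit pid list matches the
    # "wait 188031 188033 ..." entry that appears further down in the trace
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

Because the workers run concurrently, the add_ns/remove_ns entries for different namespace IDs interleave in the log below; that interleaving is the expected shape of this stress test, not an error.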
00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 188031 188033 188034 188036 188038 188040 188042 188044 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.856 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.116 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.374 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.375 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.375 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.375 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.375 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.375 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.375 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.375 00:51:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.633 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.633 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.633 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:07:39.633 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.633 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.634 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.892 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.893 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.893 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.893 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.151 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.409 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.410 00:51:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.410 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.668 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:40.668 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.668 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.668 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.668 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.668 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.668 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.668 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.926 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.185 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.444 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.444 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.444 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.444 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.444 00:51:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.444 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.444 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.444 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.444 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.444 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.444 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.444 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.444 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.703 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.962 00:51:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.962 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.962 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.962 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.962 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.962 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.962 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.962 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.220 00:51:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.220 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:42.221 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:42.479 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:42.479 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.479 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:42.479 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.479 00:51:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:42.479 00:51:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.479 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:42.737 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:42.737 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:42.737 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:42.737 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:42.737 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.737 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:42.737 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:42.737 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:42.995 rmmod nvme_rdma 00:07:42.995 rmmod nvme_fabrics 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 181473 ']' 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 181473 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 181473 ']' 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 181473 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 181473 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 181473' 00:07:42.995 killing process with pid 181473 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 181473 00:07:42.995 00:51:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 181473 00:07:44.371 00:51:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:44.371 00:51:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:44.371 00:07:44.371 real 0m48.429s 00:07:44.371 user 3m35.420s 00:07:44.371 sys 0m13.860s 00:07:44.371 00:51:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.371 00:51:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:44.371 ************************************ 00:07:44.371 END TEST nvmf_ns_hotplug_stress 00:07:44.371 ************************************ 00:07:44.371 00:51:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:44.371 00:51:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:44.371 00:51:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.371 00:51:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.371 ************************************ 00:07:44.371 START TEST nvmf_delete_subsystem 00:07:44.371 ************************************ 00:07:44.371 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:44.632 * Looking for test storage... 
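The hotplug stress loop traced above reduces to one pattern: rpc.py repeatedly attaches null bdevs to nqn.2016-06.io.spdk:cnode1 as namespaces and detaches them again while I/O is in flight. Below is a minimal hand-written sketch of that pattern only, not the actual ns_hotplug_stress.sh; the rpc.py path, subsystem NQN, nsid-to-bdev mapping and the ten-iteration bound come from the trace, while the fixed add/remove ordering and the assumption that null0..null9 already exist are simplifications.

#!/usr/bin/env bash
# Sketch of the namespace hotplug stress pattern seen in the trace above.
# Assumes a running nvmf target with subsystem cnode1 and null bdevs null0..null9.
rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; i++)); do
    # Attach every null bdev as a namespace (bdev nullN gets nsid N+1, as in the log).
    for n in {0..9}; do
        "$rpc" nvmf_subsystem_add_ns -n $((n + 1)) "$nqn" "null$n"
    done
    # Detach them again to exercise the hotplug/removal paths under load.
    for n in {0..9}; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" $((n + 1))
    done
done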
00:07:44.632 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.632 --rc genhtml_branch_coverage=1 00:07:44.632 --rc genhtml_function_coverage=1 00:07:44.632 --rc genhtml_legend=1 00:07:44.632 --rc geninfo_all_blocks=1 00:07:44.632 --rc geninfo_unexecuted_blocks=1 00:07:44.632 00:07:44.632 ' 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.632 --rc genhtml_branch_coverage=1 00:07:44.632 --rc genhtml_function_coverage=1 00:07:44.632 --rc genhtml_legend=1 00:07:44.632 --rc geninfo_all_blocks=1 00:07:44.632 --rc geninfo_unexecuted_blocks=1 00:07:44.632 00:07:44.632 ' 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.632 --rc genhtml_branch_coverage=1 00:07:44.632 --rc genhtml_function_coverage=1 00:07:44.632 --rc genhtml_legend=1 00:07:44.632 --rc geninfo_all_blocks=1 00:07:44.632 --rc geninfo_unexecuted_blocks=1 00:07:44.632 00:07:44.632 ' 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.632 --rc genhtml_branch_coverage=1 00:07:44.632 --rc genhtml_function_coverage=1 00:07:44.632 --rc genhtml_legend=1 00:07:44.632 --rc geninfo_all_blocks=1 00:07:44.632 --rc geninfo_unexecuted_blocks=1 00:07:44.632 00:07:44.632 ' 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:44.632 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.633 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:44.633 00:51:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:51.210 00:51:56 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:51.210 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:51.211 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.211 00:51:56 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:51.211 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@405 -- # modinfo irdma 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:51.211 Found net devices under 0000:af:00.0: cvl_0_0 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:51.211 Found net devices under 0000:af:00.1: cvl_0_1 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:51.211 00:51:56 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo cvl_0_0 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.211 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo cvl_0_1 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:07:51.212 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:51.212 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:51.212 altname enp175s0f0np0 00:07:51.212 altname ens801f0np0 00:07:51.212 inet 192.168.100.8/24 scope global cvl_0_0 00:07:51.212 valid_lft forever preferred_lft forever 00:07:51.212 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:51.212 valid_lft forever preferred_lft forever 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:07:51.212 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:51.212 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:51.212 altname enp175s0f1np1 00:07:51.212 altname ens801f1np1 00:07:51.212 inet 192.168.100.9/24 scope global cvl_0_1 00:07:51.212 valid_lft forever preferred_lft forever 00:07:51.212 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:51.212 valid_lft forever preferred_lft forever 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:51.212 00:51:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo cvl_0_0 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo cvl_0_1 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:51.212 192.168.100.9' 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:51.212 192.168.100.9' 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:51.212 192.168.100.9' 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:51.212 00:51:57 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=192217 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 192217 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 192217 ']' 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.212 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.212 [2024-11-19 00:51:57.165458] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:51.212 [2024-11-19 00:51:57.165547] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.212 [2024-11-19 00:51:57.292040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.212 [2024-11-19 00:51:57.396681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.212 [2024-11-19 00:51:57.396725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.213 [2024-11-19 00:51:57.396736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.213 [2024-11-19 00:51:57.396746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.213 [2024-11-19 00:51:57.396754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
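Two reusable patterns are visible in the trace above: the interface-to-IP lookup (the ip -o -4 / awk / cut pipeline that yields 192.168.100.8 and 192.168.100.9 for cvl_0_0 and cvl_0_1), and launching nvmf_tgt with the flags recorded in the log, then waiting for its RPC socket. The sketch below reconstructs both under stated assumptions; it is not the harness's own get_ip_address/waitforlisten helpers, and the rpc_get_methods polling loop is an assumed stand-in for the real wait logic.

spdk=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk

get_ip_address() {
    # Fourth field of `ip -o -4 addr show` is addr/prefix; strip the prefix length.
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address cvl_0_0)   # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address cvl_0_1)  # 192.168.100.9 in this run

# Same invocation as recorded in the log: shm id 0, tracepoint mask 0xFFFF, cores 0-1.
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# Poll the default RPC socket until the target answers (assumed stand-in for waitforlisten).
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done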
00:07:51.213 [2024-11-19 00:51:57.398774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.213 [2024-11-19 00:51:57.398795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.472 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.472 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:51.472 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:51.472 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.472 00:51:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.472 [2024-11-19 00:51:58.044078] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000028cc0/0x617000007c40) succeed. 00:07:51.472 [2024-11-19 00:51:58.053385] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000028e40/0x617000007fc0) succeed. 00:07:51.472 [2024-11-19 00:51:58.053413] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.472 [2024-11-19 00:51:58.073708] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.472 NULL1 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.472 Delay0 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=192395 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:51.472 00:51:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 
5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:51.731 [2024-11-19 00:51:58.237590] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:53.629 00:52:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.629 00:52:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.629 00:52:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.194 [2024-11-19 00:52:00.775045] nvme_rdma.c:2452:nvme_rdma_log_wc_status: *ERROR*: WC error, qid 2, qp state 1, request 0x35184374496688 type 1, status: (12): transport retry counter exceeded 00:07:54.194 NVMe io qpair process completion error 00:07:54.194 NVMe io qpair process completion error 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Write completed with error (sct=0, 
sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Write completed with error (sct=0, sc=8) 00:07:54.194 starting I/O failed: -6 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.194 Read completed with error (sct=0, sc=8) 00:07:54.195 starting I/O failed: -6 00:07:54.195 Read completed with error (sct=0, sc=8) 00:07:54.761 [2024-11-19 00:52:01.301254] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 starting I/O 
failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 [2024-11-19 00:52:01.302319] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with 
error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 [2024-11-19 00:52:01.303199] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 00:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.762 00:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:54.762 00:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 192395 00:07:54.762 00:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 
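Editor's note: the trace above shows delete_subsystem.sh waiting for the background spdk_nvme_perf job to exit once the subsystem has been deleted, by polling its PID with kill -0 and sleeping 0.5 s per iteration (the (( delay++ > 30 )) check bounds the wait). A minimal sketch of that pattern, with hypothetical variable names (perf_pid is assumed to have been captured with $! when perf was launched):

    perf_pid=$!        # hypothetical: PID of the backgrounded spdk_nvme_perf process
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 only tests existence, it sends no signal
        (( delay++ > 30 )) && { echo "perf still running after ~15s" >&2; break; }
        sleep 0.5
    done

Once the in-flight I/O fails and perf exits, kill -0 starts failing and the loop falls through, which is what the repeated sleep/kill -0 records in this part of the log are exercising.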
00:07:55.328 00:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:55.328 00:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 192395 00:07:55.328 00:52:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:55.891 NVMe io qpair process completion error 00:07:55.891 NVMe io qpair process completion error 00:07:55.891 00:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:55.891 00:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 192395 00:07:55.891 00:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:56.150 [2024-11-19 00:52:02.836333] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 [2024-11-19 00:52:02.837137] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error 
(sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 [2024-11-19 00:52:02.841650] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.150 Write completed with error (sct=0, sc=8) 00:07:56.150 Read completed with error (sct=0, sc=8) 00:07:56.151 Read completed with error (sct=0, sc=8) 00:07:56.151 Read completed with error (sct=0, sc=8) 00:07:56.151 Read completed with error (sct=0, sc=8) 00:07:56.151 Read completed with error (sct=0, sc=8) 00:07:56.151 Read completed with error (sct=0, sc=8) 00:07:56.151 Read completed with error (sct=0, sc=8) 00:07:56.410 [2024-11-19 00:52:02.842956] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Write completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Write completed with error (sct=0, sc=8) 00:07:56.410 Write completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Write completed with error (sct=0, sc=8) 00:07:56.410 Write completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Write completed with error (sct=0, sc=8) 00:07:56.410 Write completed with error (sct=0, sc=8) 00:07:56.410 Read 
completed with error (sct=0, sc=8) 00:07:56.410 Write completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Write completed with error (sct=0, sc=8) 00:07:56.410 Write completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Read completed with error (sct=0, sc=8) 00:07:56.410 Initializing NVMe Controllers 00:07:56.410 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:56.410 Controller IO queue size 128, less than required. 00:07:56.410 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:56.410 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:56.410 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:56.410 Initialization complete. Launching workers. 00:07:56.410 ======================================================== 00:07:56.410 Latency(us) 00:07:56.410 Device Information : IOPS MiB/s Average min max 00:07:56.410 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 140.82 0.07 1314244.23 434824.64 2490838.25 00:07:56.410 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 140.82 0.07 1349206.59 947295.72 2481273.60 00:07:56.410 ======================================================== 00:07:56.410 Total : 281.65 0.14 1331725.41 434824.64 2490838.25 00:07:56.410 00:07:56.410 [2024-11-19 00:52:02.850115] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:07:56.410 00:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:56.410 00:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 192395 00:07:56.410 00:52:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:56.410 [2024-11-19 00:52:02.880315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:07:56.410 [2024-11-19 00:52:02.880344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
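Editor's note: the latency summary above comes from the spdk_nvme_perf run that was started before the subsystem was deleted; the transport-retry and CM-event errors are the expected fallout of removing the subsystem underneath active I/O. For reference, this is the invocation pattern visible in this log, written out with the flags commented (the flag descriptions are the editor's reading and should be checked against spdk_nvme_perf --help, not taken as part of the log):

    # Flags as seen in this log:
    #   -r  target transport ID (RDMA, IPv4, 192.168.100.8:4420)
    #   -c  core mask (0xC = cores 2 and 3, matching the "lcore 2"/"lcore 3" lines above)
    #   -q  queue depth (128)        -o  I/O size in bytes (512)
    #   -w  workload (randrw)        -M  read percentage of the mix (70% reads)
    #   -t  run time in seconds      -P  number of I/O qpairs (editor's understanding)
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4

The path is relative to an SPDK build tree; in this job the binary lives under the Jenkins workspace as shown in the log lines that follow.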
00:07:56.410 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 192395 00:07:56.977 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (192395) - No such process 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 192395 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 192395 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 192395 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:56.977 [2024-11-19 00:52:03.398694] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=193309 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:07:56.977 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:56.977 [2024-11-19 00:52:03.544786] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:57.235 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:57.235 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:07:57.235 00:52:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:57.800 00:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:57.800 00:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:07:57.800 00:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:58.365 00:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:58.365 00:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:07:58.365 00:52:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:58.930 00:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:58.931 00:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:07:58.931 00:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:59.496 00:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:59.496 00:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:07:59.496 00:52:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:59.754 00:52:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:59.754 00:52:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:07:59.754 00:52:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:00.320 00:52:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:00.320 00:52:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:08:00.320 00:52:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:00.886 00:52:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:00.886 00:52:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:08:00.886 00:52:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:01.451 00:52:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:01.451 00:52:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:08:01.451 00:52:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:02.017 00:52:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:02.017 00:52:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:08:02.017 00:52:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:02.275 00:52:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:02.275 00:52:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:08:02.275 00:52:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:02.840 00:52:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:02.840 00:52:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:08:02.840 00:52:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:03.406 00:52:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:03.406 00:52:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:08:03.406 00:52:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:03.973 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:03.973 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:08:03.973 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:04.231 Initializing NVMe Controllers 00:08:04.231 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:04.231 Controller IO queue size 128, less than required. 00:08:04.231 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:04.231 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:04.231 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:04.231 Initialization complete. Launching workers. 00:08:04.231 ======================================================== 00:08:04.231 Latency(us) 00:08:04.231 Device Information : IOPS MiB/s Average min max 00:08:04.231 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001557.84 1000073.49 1004400.67 00:08:04.231 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002879.78 1000141.74 1007328.90 00:08:04.231 ======================================================== 00:08:04.231 Total : 256.00 0.12 1002218.81 1000073.49 1007328.90 00:08:04.231 00:08:04.489 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:04.489 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 193309 00:08:04.489 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (193309) - No such process 00:08:04.489 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 193309 00:08:04.490 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:04.490 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:04.490 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.490 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:04.490 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:04.490 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:04.490 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:04.490 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.490 00:52:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:04.490 rmmod nvme_rdma 00:08:04.490 rmmod nvme_fabrics 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 192217 ']' 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 192217 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 192217 ']' 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 192217 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 192217 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 192217' 00:08:04.490 killing process with pid 192217 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 192217 00:08:04.490 00:52:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 192217 00:08:05.868 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.868 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:05.868 00:08:05.868 real 0m21.234s 00:08:05.868 user 0m53.870s 00:08:05.868 sys 0m5.691s 00:08:05.868 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.868 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.868 ************************************ 00:08:05.868 END TEST nvmf_delete_subsystem 00:08:05.868 ************************************ 00:08:05.868 00:52:12 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.869 ************************************ 00:08:05.869 START TEST nvmf_host_management 00:08:05.869 ************************************ 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:05.869 * Looking for test storage... 
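Editor's note: the nvmf_delete_subsystem test above finishes with the standard nvmftestfini teardown: the SPDK target process (pid 192217, reactor_0) is killed and the host-side NVMe-oF kernel modules are unloaded. A condensed sketch of that teardown, assuming the same module set as in this log (module names are taken from the rmmod/modprobe lines above; nvmfpid is a hypothetical variable holding the target's PID):

    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true   # reap the target if it is a child of this shell
    sync
    modprobe -v -r nvme-rdma              # the rmmod lines above show this also drops nvme_fabrics
    modprobe -v -r nvme-fabrics

The next test (nvmf_host_management) then starts from a clean state and re-initialises the RDMA NICs, which is what the storage-lookup and NIC-discovery records below are doing.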
00:08:05.869 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.869 --rc genhtml_branch_coverage=1 00:08:05.869 --rc genhtml_function_coverage=1 00:08:05.869 --rc genhtml_legend=1 00:08:05.869 --rc geninfo_all_blocks=1 00:08:05.869 --rc geninfo_unexecuted_blocks=1 00:08:05.869 00:08:05.869 ' 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.869 --rc genhtml_branch_coverage=1 00:08:05.869 --rc genhtml_function_coverage=1 00:08:05.869 --rc genhtml_legend=1 00:08:05.869 --rc geninfo_all_blocks=1 00:08:05.869 --rc geninfo_unexecuted_blocks=1 00:08:05.869 00:08:05.869 ' 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.869 --rc genhtml_branch_coverage=1 00:08:05.869 --rc genhtml_function_coverage=1 00:08:05.869 --rc genhtml_legend=1 00:08:05.869 --rc geninfo_all_blocks=1 00:08:05.869 --rc geninfo_unexecuted_blocks=1 00:08:05.869 00:08:05.869 ' 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.869 --rc genhtml_branch_coverage=1 00:08:05.869 --rc genhtml_function_coverage=1 00:08:05.869 --rc genhtml_legend=1 00:08:05.869 --rc geninfo_all_blocks=1 00:08:05.869 --rc geninfo_unexecuted_blocks=1 00:08:05.869 00:08:05.869 ' 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:05.869 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.870 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:05.870 00:52:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:12.446 00:52:18 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:12.446 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:12.446 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@405 -- # modinfo irdma 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:12.446 Found net devices under 0000:af:00.0: cvl_0_0 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.446 00:52:18 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:12.446 Found net devices under 0000:af:00.1: cvl_0_1 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:12.446 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:08:12.447 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:12.447 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:08:12.447 altname enp175s0f0np0 00:08:12.447 altname ens801f0np0 00:08:12.447 inet 192.168.100.8/24 scope global cvl_0_0 00:08:12.447 valid_lft forever preferred_lft forever 00:08:12.447 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:08:12.447 valid_lft forever preferred_lft forever 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:08:12.447 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:12.447 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:08:12.447 altname enp175s0f1np1 00:08:12.447 altname ens801f1np1 00:08:12.447 inet 192.168.100.9/24 scope global cvl_0_1 00:08:12.447 valid_lft forever preferred_lft forever 00:08:12.447 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:08:12.447 valid_lft forever preferred_lft forever 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 
00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:12.447 192.168.100.9' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:12.447 192.168.100.9' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:12.447 192.168.100.9' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:12.447 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # 
modprobe nvme-rdma 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=197953 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 197953 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 197953 ']' 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.448 00:52:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.448 [2024-11-19 00:52:18.472509] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:12.448 [2024-11-19 00:52:18.472601] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.448 [2024-11-19 00:52:18.600273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.448 [2024-11-19 00:52:18.706910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.448 [2024-11-19 00:52:18.706960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.448 [2024-11-19 00:52:18.706969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.448 [2024-11-19 00:52:18.706980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.448 [2024-11-19 00:52:18.706987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:12.448 [2024-11-19 00:52:18.709396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.448 [2024-11-19 00:52:18.709472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.448 [2024-11-19 00:52:18.709488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.448 [2024-11-19 00:52:18.709514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.706 [2024-11-19 00:52:19.359055] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000029440/0x617000007c40) succeed. 00:08:12.706 [2024-11-19 00:52:19.368571] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:08:12.706 [2024-11-19 00:52:19.368601] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.706 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.965 Malloc0 00:08:12.965 [2024-11-19 00:52:19.503472] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=198220 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 198220 /var/tmp/bdevperf.sock 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 198220 ']' 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.965 { 00:08:12.965 "params": { 00:08:12.965 "name": "Nvme$subsystem", 00:08:12.965 "trtype": "$TEST_TRANSPORT", 00:08:12.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.965 "adrfam": "ipv4", 00:08:12.965 "trsvcid": "$NVMF_PORT", 00:08:12.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.965 "hdgst": ${hdgst:-false}, 00:08:12.965 "ddgst": ${ddgst:-false} 00:08:12.965 }, 00:08:12.965 "method": "bdev_nvme_attach_controller" 00:08:12.965 } 00:08:12.965 EOF 00:08:12.965 )") 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:12.965 00:52:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.965 "params": { 00:08:12.965 "name": "Nvme0", 00:08:12.965 "trtype": "rdma", 00:08:12.965 "traddr": "192.168.100.8", 00:08:12.965 "adrfam": "ipv4", 00:08:12.965 "trsvcid": "4420", 00:08:12.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:12.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:12.965 "hdgst": false, 00:08:12.965 "ddgst": false 00:08:12.965 }, 00:08:12.965 "method": "bdev_nvme_attach_controller" 00:08:12.965 }' 00:08:12.965 [2024-11-19 00:52:19.622681] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:12.965 [2024-11-19 00:52:19.622767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198220 ] 00:08:13.225 [2024-11-19 00:52:19.744954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.225 [2024-11-19 00:52:19.864784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.793 Running I/O for 10 seconds... 
00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.793 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=382 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 382 -ge 100 ']' 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.052 00:52:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:14.623 [2024-11-19 00:52:21.075326] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:08:14.623 [2024-11-19 00:52:21.075391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff00 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bcfe40 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bbfd80 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bafcc0 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b9fc00 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b8fb40 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b7fa80 len:0x10000 key:0x538e4ce1 00:08:14.623 
[2024-11-19 00:52:21.075563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b6f9c0 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b5f900 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b4f840 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b3f780 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b2f6c0 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b1f600 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b0f540 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aff480 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aef3c0 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075757] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000adf300 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000acf240 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000abf180 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aaf0c0 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a9f000 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a8ef40 len:0x10000 key:0x538e4ce1 00:08:14.623 [2024-11-19 00:52:21.075887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.623 [2024-11-19 00:52:21.075898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a7ee80 len:0x10000 key:0x538e4ce1 00:08:14.624 [2024-11-19 00:52:21.075908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.075920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a6edc0 len:0x10000 key:0x538e4ce1 00:08:14.624 [2024-11-19 00:52:21.075929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.075940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a5ed00 len:0x10000 key:0x538e4ce1 00:08:14.624 [2024-11-19 00:52:21.075953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:14.624 [2024-11-19 00:52:21.075964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a4ec40 len:0x10000 key:0x538e4ce1 00:08:14.624 [2024-11-19 00:52:21.075976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.075987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a3eb80 len:0x10000 key:0x538e4ce1 00:08:14.624 [2024-11-19 00:52:21.075997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a92f000 len:0x10000 key:0x187bc09e 00:08:14.624 [2024-11-19 00:52:21.076018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a950000 len:0x10000 key:0x187bc09e 00:08:14.624 [2024-11-19 00:52:21.076039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a971000 len:0x10000 key:0x187bc09e 00:08:14.624 [2024-11-19 00:52:21.076062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a992000 len:0x10000 key:0x187bc09e 00:08:14.624 [2024-11-19 00:52:21.076083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a9b3000 len:0x10000 key:0x187bc09e 00:08:14.624 [2024-11-19 00:52:21.076105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a2eac0 len:0x10000 key:0x538e4ce1 00:08:14.624 [2024-11-19 00:52:21.076126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a1ea00 len:0x10000 key:0x538e4ce1 00:08:14.624 [2024-11-19 00:52:21.076148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a0e940 len:0x10000 key:0x538e4ce1 00:08:14.624 [2024-11-19 00:52:21.076169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000deffc0 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ddff00 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dcfe40 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dbfd80 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dafcc0 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d9fc00 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d8fb40 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d7fa80 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70400 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d6f9c0 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d5f900 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d4f840 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d3f780 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d2f6c0 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d1f600 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d0f540 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cff480 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cef3c0 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cdf300 len:0x10000 key:0x14a48811 
00:08:14.624 [2024-11-19 00:52:21.076569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ccf240 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cbf180 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000caf0c0 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c9f000 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c8ef40 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.624 [2024-11-19 00:52:21.076687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c7ee80 len:0x10000 key:0x14a48811 00:08:14.624 [2024-11-19 00:52:21.076697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.625 [2024-11-19 00:52:21.076709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c6edc0 len:0x10000 key:0x14a48811 00:08:14.625 [2024-11-19 00:52:21.076720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.625 [2024-11-19 00:52:21.076731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c5ed00 len:0x10000 key:0x14a48811 00:08:14.625 [2024-11-19 00:52:21.076741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.625 [2024-11-19 00:52:21.076755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c4ec40 len:0x10000 key:0x14a48811 00:08:14.625 [2024-11-19 00:52:21.076764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.625 [2024-11-19 00:52:21.076775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c3eb80 len:0x10000 key:0x14a48811 00:08:14.625 [2024-11-19 00:52:21.076784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.625 [2024-11-19 00:52:21.076796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c2eac0 len:0x10000 key:0x14a48811 00:08:14.625 [2024-11-19 00:52:21.076806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.625 [2024-11-19 00:52:21.078473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:14.625 task offset: 65536 on job bdev=Nvme0n1 fails 00:08:14.625 00:08:14.625 Latency(us) 00:08:14.625 [2024-11-18T23:52:21.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.625 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:14.625 Job: Nvme0n1 ended in about 0.75 seconds with error 00:08:14.625 Verification LBA range: start 0x0 length 0x400 00:08:14.625 Nvme0n1 : 0.75 675.69 42.23 85.29 0.00 82906.50 2184.53 563235.11 00:08:14.625 [2024-11-18T23:52:21.318Z] =================================================================================================================== 00:08:14.625 [2024-11-18T23:52:21.318Z] Total : 675.69 42.23 85.29 0.00 82906.50 2184.53 563235.11 00:08:14.625 [2024-11-19 00:52:21.095170] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.625 [2024-11-19 00:52:21.095209] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:08:14.625 [2024-11-19 00:52:21.127706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:08:14.625 [2024-11-19 00:52:21.148040] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 198220 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:14.884 { 00:08:14.884 "params": { 00:08:14.884 "name": "Nvme$subsystem", 00:08:14.884 "trtype": "$TEST_TRANSPORT", 00:08:14.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.884 "adrfam": "ipv4", 00:08:14.884 "trsvcid": "$NVMF_PORT", 00:08:14.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.884 "hdgst": ${hdgst:-false}, 00:08:14.884 "ddgst": ${ddgst:-false} 00:08:14.884 }, 00:08:14.884 "method": "bdev_nvme_attach_controller" 00:08:14.884 } 00:08:14.884 EOF 00:08:14.884 )") 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:14.884 00:52:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.884 "params": { 00:08:14.884 "name": "Nvme0", 00:08:14.884 "trtype": "rdma", 00:08:14.884 "traddr": "192.168.100.8", 00:08:14.884 "adrfam": "ipv4", 00:08:14.884 "trsvcid": "4420", 00:08:14.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:14.884 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:14.884 "hdgst": false, 00:08:14.884 "ddgst": false 00:08:14.884 }, 00:08:14.884 "method": "bdev_nvme_attach_controller" 00:08:14.884 }' 00:08:15.144 [2024-11-19 00:52:21.615530] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:15.144 [2024-11-19 00:52:21.615612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198467 ] 00:08:15.144 [2024-11-19 00:52:21.738682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.403 [2024-11-19 00:52:21.855067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.662 Running I/O for 1 seconds... 
00:08:17.042 2746.00 IOPS, 171.62 MiB/s 00:08:17.043 Latency(us) 00:08:17.043 [2024-11-18T23:52:23.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.043 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:17.043 Verification LBA range: start 0x0 length 0x400 00:08:17.043 Nvme0n1 : 1.02 2762.95 172.68 0.00 0.00 22684.06 1997.29 35701.52 00:08:17.043 [2024-11-18T23:52:23.736Z] =================================================================================================================== 00:08:17.043 [2024-11-18T23:52:23.736Z] Total : 2762.95 172.68 0.00 0.00 22684.06 1997.29 35701.52 00:08:17.612 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 198220 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.612 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:17.612 rmmod nvme_rdma 00:08:17.612 rmmod nvme_fabrics 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 197953 ']' 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 197953 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 197953 ']' 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 197953 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197953 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197953' 00:08:17.872 killing process with pid 197953 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 197953 00:08:17.872 00:52:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 197953 00:08:19.252 [2024-11-19 00:52:25.685632] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:19.252 00:08:19.252 real 0m13.436s 00:08:19.252 user 0m33.607s 00:08:19.252 sys 0m5.539s 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 ************************************ 00:08:19.252 END TEST nvmf_host_management 00:08:19.252 ************************************ 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 ************************************ 00:08:19.252 START TEST nvmf_lvol 00:08:19.252 ************************************ 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:19.252 * Looking for test storage... 
00:08:19.252 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.252 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.512 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.512 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.513 --rc genhtml_branch_coverage=1 00:08:19.513 --rc genhtml_function_coverage=1 00:08:19.513 --rc genhtml_legend=1 00:08:19.513 --rc geninfo_all_blocks=1 00:08:19.513 --rc geninfo_unexecuted_blocks=1 00:08:19.513 00:08:19.513 ' 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.513 --rc genhtml_branch_coverage=1 00:08:19.513 --rc genhtml_function_coverage=1 00:08:19.513 --rc genhtml_legend=1 00:08:19.513 --rc geninfo_all_blocks=1 00:08:19.513 --rc geninfo_unexecuted_blocks=1 00:08:19.513 00:08:19.513 ' 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.513 --rc genhtml_branch_coverage=1 00:08:19.513 --rc genhtml_function_coverage=1 00:08:19.513 --rc genhtml_legend=1 00:08:19.513 --rc geninfo_all_blocks=1 00:08:19.513 --rc geninfo_unexecuted_blocks=1 00:08:19.513 00:08:19.513 ' 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.513 --rc genhtml_branch_coverage=1 00:08:19.513 --rc genhtml_function_coverage=1 00:08:19.513 --rc genhtml_legend=1 00:08:19.513 --rc geninfo_all_blocks=1 00:08:19.513 --rc geninfo_unexecuted_blocks=1 00:08:19.513 00:08:19.513 ' 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.513 00:52:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.513 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.514 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.514 00:52:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:26.101 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.102 00:52:31 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:26.102 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:26.102 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@405 -- # modinfo irdma 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:26.102 Found net devices under 0000:af:00.0: cvl_0_0 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:26.102 Found net devices under 0000:af:00.1: cvl_0_1 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 
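The rdma_device_init step traced next amounts to loading the RoCE-enabled irdma driver plus the core InfiniBand/RDMA modules; a condensed sketch of that sequence (not the helper itself):
modprobe irdma roce_ena=1   # enable RoCE on the E810/ice ports found above
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"            # modules the helper loads before assigning NIC IPs
done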
00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:26.102 00:52:31 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:26.102 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:08:26.102 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:26.102 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:08:26.102 altname enp175s0f0np0 00:08:26.102 altname ens801f0np0 00:08:26.102 inet 192.168.100.8/24 scope global cvl_0_0 00:08:26.102 valid_lft forever preferred_lft forever 00:08:26.102 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:08:26.102 valid_lft forever preferred_lft forever 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:08:26.103 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:26.103 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:08:26.103 altname enp175s0f1np1 00:08:26.103 altname ens801f1np1 00:08:26.103 inet 192.168.100.9/24 scope global cvl_0_1 00:08:26.103 valid_lft forever preferred_lft forever 00:08:26.103 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:08:26.103 valid_lft forever preferred_lft forever 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:26.103 
00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:26.103 192.168.100.9' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:26.103 192.168.100.9' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:26.103 00:52:31 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:26.103 192.168.100.9' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=202358 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 202358 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 202358 ']' 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.103 00:52:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:26.103 [2024-11-19 00:52:31.940943] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:26.103 [2024-11-19 00:52:31.941035] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.103 [2024-11-19 00:52:32.066263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:26.103 [2024-11-19 00:52:32.176691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.103 [2024-11-19 00:52:32.176738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
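Outside the harness, the nvmfappstart and nvmf_create_transport steps driven here reduce to roughly the following; a sketch only, with a fixed sleep standing in for the waitforlisten poll on /var/tmp/spdk.sock:
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
sleep 2   # the test instead polls /var/tmp/spdk.sock via waitforlisten
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192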
00:08:26.103 [2024-11-19 00:52:32.176749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.103 [2024-11-19 00:52:32.176761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.103 [2024-11-19 00:52:32.176769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.103 [2024-11-19 00:52:32.179162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.103 [2024-11-19 00:52:32.179228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.103 [2024-11-19 00:52:32.179249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.103 00:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.103 00:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:26.103 00:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.103 00:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.103 00:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:26.103 00:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.103 00:52:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:26.363 [2024-11-19 00:52:32.962256] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000028fc0/0x617000007c40) succeed. 00:08:26.363 [2024-11-19 00:52:32.971662] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029140/0x617000007fc0) succeed. 00:08:26.363 [2024-11-19 00:52:32.971691] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:08:26.363 00:52:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:26.622 00:52:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:26.622 00:52:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:26.881 00:52:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:26.881 00:52:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:27.141 00:52:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:27.400 00:52:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f9e99231-66a4-4fc2-aef7-164ee750af18 00:08:27.400 00:52:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9e99231-66a4-4fc2-aef7-164ee750af18 lvol 20 00:08:27.659 00:52:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d88e6e25-6d37-4396-a652-199c3912dcc9 00:08:27.659 00:52:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:27.919 00:52:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d88e6e25-6d37-4396-a652-199c3912dcc9 00:08:27.919 00:52:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:28.178 [2024-11-19 00:52:34.762324] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:28.178 00:52:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:28.438 00:52:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=202910 00:08:28.438 00:52:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:28.438 00:52:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:29.376 00:52:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d88e6e25-6d37-4396-a652-199c3912dcc9 MY_SNAPSHOT 00:08:29.636 00:52:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6fe9fe62-fefd-4544-a4f6-da2f4c0d78d8 00:08:29.636 00:52:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d88e6e25-6d37-4396-a652-199c3912dcc9 30 00:08:29.895 00:52:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6fe9fe62-fefd-4544-a4f6-da2f4c0d78d8 MY_CLONE 00:08:30.154 00:52:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9bbeba57-f63a-4c9a-9c5c-652626d807f0 00:08:30.154 00:52:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9bbeba57-f63a-4c9a-9c5c-652626d807f0 00:08:30.413 00:52:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 202910 00:08:40.396 Initializing NVMe Controllers 00:08:40.396 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:40.396 Controller IO queue size 128, less than required. 00:08:40.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:40.396 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:40.396 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:40.396 Initialization complete. Launching workers. 00:08:40.396 ======================================================== 00:08:40.396 Latency(us) 00:08:40.396 Device Information : IOPS MiB/s Average min max 00:08:40.396 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15331.60 59.89 8349.55 2100.12 136692.98 00:08:40.396 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15376.20 60.06 8324.69 101.40 150372.52 00:08:40.396 ======================================================== 00:08:40.396 Total : 30707.80 119.95 8337.10 101.40 150372.52 00:08:40.396 00:08:40.396 00:52:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.397 00:52:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d88e6e25-6d37-4396-a652-199c3912dcc9 00:08:40.397 00:52:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9e99231-66a4-4fc2-aef7-164ee750af18 00:08:40.397 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:40.397 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:40.397 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:40.397 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:40.397 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:40.397 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:40.397 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:40.397 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:40.397 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:40.397 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:40.397 rmmod nvme_rdma 00:08:40.397 rmmod nvme_fabrics 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 202358 ']' 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 202358 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 202358 ']' 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 202358 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 202358 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 202358' 00:08:40.657 killing process with pid 202358 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 202358 00:08:40.657 00:52:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 202358 00:08:42.037 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.037 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:42.037 00:08:42.037 real 0m22.890s 00:08:42.037 user 1m15.821s 00:08:42.037 sys 0m5.595s 00:08:42.037 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.037 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.037 ************************************ 00:08:42.037 END TEST nvmf_lvol 00:08:42.037 ************************************ 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.298 ************************************ 00:08:42.298 START TEST nvmf_lvs_grow 00:08:42.298 ************************************ 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:42.298 * Looking for test storage... 
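Condensed, the nvmf_lvol run that just finished drives roughly this RPC sequence; a sketch in which the lvs/lvol/snap/clone variables stand in for the UUIDs rpc.py returned during the run:
rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
# teardown, as at the end of the test
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"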
00:08:42.298 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.298 --rc genhtml_branch_coverage=1 00:08:42.298 --rc genhtml_function_coverage=1 00:08:42.298 --rc genhtml_legend=1 00:08:42.298 --rc geninfo_all_blocks=1 00:08:42.298 --rc geninfo_unexecuted_blocks=1 00:08:42.298 00:08:42.298 ' 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.298 --rc genhtml_branch_coverage=1 00:08:42.298 --rc genhtml_function_coverage=1 00:08:42.298 --rc genhtml_legend=1 00:08:42.298 --rc geninfo_all_blocks=1 00:08:42.298 --rc geninfo_unexecuted_blocks=1 00:08:42.298 00:08:42.298 ' 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.298 --rc genhtml_branch_coverage=1 00:08:42.298 --rc genhtml_function_coverage=1 00:08:42.298 --rc genhtml_legend=1 00:08:42.298 --rc geninfo_all_blocks=1 00:08:42.298 --rc geninfo_unexecuted_blocks=1 00:08:42.298 00:08:42.298 ' 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.298 --rc genhtml_branch_coverage=1 00:08:42.298 --rc genhtml_function_coverage=1 00:08:42.298 --rc genhtml_legend=1 00:08:42.298 --rc geninfo_all_blocks=1 00:08:42.298 --rc geninfo_unexecuted_blocks=1 00:08:42.298 00:08:42.298 ' 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.298 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.299 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.299 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.559 00:52:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.559 00:52:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.559 00:52:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.559 00:52:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.559 00:52:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:49.137 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:49.137 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@405 -- # modinfo irdma 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:49.137 Found net devices under 0000:af:00.0: cvl_0_0 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:49.137 Found net devices under 0000:af:00.1: cvl_0_1 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.137 00:52:54 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:49.137 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:49.138 00:52:54 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:08:49.138 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:49.138 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:08:49.138 altname enp175s0f0np0 00:08:49.138 altname ens801f0np0 00:08:49.138 inet 192.168.100.8/24 scope global cvl_0_0 00:08:49.138 valid_lft forever preferred_lft forever 00:08:49.138 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:08:49.138 valid_lft forever preferred_lft forever 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:08:49.138 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:49.138 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:08:49.138 altname enp175s0f1np1 00:08:49.138 altname ens801f1np1 00:08:49.138 inet 192.168.100.9/24 scope global cvl_0_1 00:08:49.138 valid_lft forever preferred_lft forever 00:08:49.138 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:08:49.138 valid_lft forever preferred_lft forever 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:49.138 00:52:54 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo cvl_0_0 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo cvl_0_1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- 
# cut -d/ -f1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:49.138 192.168.100.9' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:49.138 192.168.100.9' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:49.138 192.168.100.9' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=208233 00:08:49.138 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 208233 00:08:49.139 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:49.139 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 208233 ']' 00:08:49.139 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.139 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.139 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.139 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.139 00:52:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.139 [2024-11-19 00:52:54.943220] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
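From here the trace hands off to nvmfappstart: build/bin/nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0x1, its pid (208233 in this run) is recorded, and the harness blocks until the target answers on /var/tmp/spdk.sock before creating the RDMA transport a few entries below. A condensed sketch of that sequence, with an illustrative polling loop standing in for the framework's real waitforlisten helper:

# Start the NVMe-oF target on core 0 and wait for its RPC socket before
# issuing any rpc.py calls. The polling loop is illustrative only; the test
# framework's waitforlisten (autotest_common.sh) does the real waiting.
SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

for _ in $(seq 1 100); do
    "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done

# With the target listening, create the RDMA transport used by the lvs_grow tests.
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192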
00:08:49.139 [2024-11-19 00:52:54.943340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.139 [2024-11-19 00:52:55.070709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.139 [2024-11-19 00:52:55.173777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.139 [2024-11-19 00:52:55.173825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.139 [2024-11-19 00:52:55.173835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.139 [2024-11-19 00:52:55.173860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.139 [2024-11-19 00:52:55.173869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.139 [2024-11-19 00:52:55.175255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.139 00:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.139 00:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:49.139 00:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.139 00:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.139 00:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.139 00:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.139 00:52:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:49.399 [2024-11-19 00:52:55.968434] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000289c0/0x617000007fc0) succeed. 00:08:49.399 [2024-11-19 00:52:55.977589] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000028b40/0x617000008340) succeed. 00:08:49.399 [2024-11-19 00:52:55.977617] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.399 ************************************ 00:08:49.399 START TEST lvs_grow_clean 00:08:49.399 ************************************ 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.399 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.658 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:49.658 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:49.917 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=86a428d7-37b5-4043-ae95-b70254553cc7 00:08:49.917 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:08:49.917 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:50.177 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:50.177 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:50.177 00:52:56 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86a428d7-37b5-4043-ae95-b70254553cc7 lvol 150 00:08:50.177 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6480b3a8-27aa-4d0f-8c96-9dabff9099a5 00:08:50.177 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:50.177 00:52:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:50.436 [2024-11-19 00:52:57.020848] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:50.436 [2024-11-19 00:52:57.020935] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:50.436 true 00:08:50.436 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:08:50.436 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:50.695 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:50.695 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:50.955 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6480b3a8-27aa-4d0f-8c96-9dabff9099a5 00:08:50.955 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:51.214 [2024-11-19 00:52:57.771158] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:51.214 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:51.474 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=208856 00:08:51.474 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:51.474 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:51.474 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 
-- # waitforlisten 208856 /var/tmp/bdevperf.sock 00:08:51.474 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 208856 ']' 00:08:51.474 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:51.474 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.474 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:51.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:51.474 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.474 00:52:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:51.474 [2024-11-19 00:52:58.040355] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:51.474 [2024-11-19 00:52:58.040448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid208856 ] 00:08:51.474 [2024-11-19 00:52:58.148540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.733 [2024-11-19 00:52:58.257470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.303 00:52:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.303 00:52:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:52.303 00:52:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:52.562 Nvme0n1 00:08:52.562 00:52:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:52.821 [ 00:08:52.821 { 00:08:52.821 "name": "Nvme0n1", 00:08:52.821 "aliases": [ 00:08:52.821 "6480b3a8-27aa-4d0f-8c96-9dabff9099a5" 00:08:52.821 ], 00:08:52.821 "product_name": "NVMe disk", 00:08:52.821 "block_size": 4096, 00:08:52.821 "num_blocks": 38912, 00:08:52.821 "uuid": "6480b3a8-27aa-4d0f-8c96-9dabff9099a5", 00:08:52.821 "numa_id": 1, 00:08:52.821 "assigned_rate_limits": { 00:08:52.821 "rw_ios_per_sec": 0, 00:08:52.821 "rw_mbytes_per_sec": 0, 00:08:52.821 "r_mbytes_per_sec": 0, 00:08:52.821 "w_mbytes_per_sec": 0 00:08:52.821 }, 00:08:52.821 "claimed": false, 00:08:52.821 "zoned": false, 00:08:52.821 "supported_io_types": { 00:08:52.821 "read": true, 00:08:52.821 "write": true, 00:08:52.821 "unmap": true, 00:08:52.821 "flush": true, 00:08:52.821 "reset": true, 00:08:52.821 "nvme_admin": true, 00:08:52.821 "nvme_io": true, 00:08:52.821 "nvme_io_md": false, 00:08:52.821 "write_zeroes": true, 00:08:52.821 "zcopy": false, 00:08:52.821 "get_zone_info": false, 00:08:52.821 "zone_management": false, 00:08:52.821 "zone_append": 
false, 00:08:52.821 "compare": true, 00:08:52.821 "compare_and_write": true, 00:08:52.821 "abort": true, 00:08:52.821 "seek_hole": false, 00:08:52.821 "seek_data": false, 00:08:52.821 "copy": true, 00:08:52.821 "nvme_iov_md": false 00:08:52.821 }, 00:08:52.821 "memory_domains": [ 00:08:52.821 { 00:08:52.821 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:52.821 "dma_device_type": 0 00:08:52.821 } 00:08:52.821 ], 00:08:52.821 "driver_specific": { 00:08:52.821 "nvme": [ 00:08:52.821 { 00:08:52.821 "trid": { 00:08:52.821 "trtype": "RDMA", 00:08:52.821 "adrfam": "IPv4", 00:08:52.822 "traddr": "192.168.100.8", 00:08:52.822 "trsvcid": "4420", 00:08:52.822 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:52.822 }, 00:08:52.822 "ctrlr_data": { 00:08:52.822 "cntlid": 1, 00:08:52.822 "vendor_id": "0x8086", 00:08:52.822 "model_number": "SPDK bdev Controller", 00:08:52.822 "serial_number": "SPDK0", 00:08:52.822 "firmware_revision": "25.01", 00:08:52.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:52.822 "oacs": { 00:08:52.822 "security": 0, 00:08:52.822 "format": 0, 00:08:52.822 "firmware": 0, 00:08:52.822 "ns_manage": 0 00:08:52.822 }, 00:08:52.822 "multi_ctrlr": true, 00:08:52.822 "ana_reporting": false 00:08:52.822 }, 00:08:52.822 "vs": { 00:08:52.822 "nvme_version": "1.3" 00:08:52.822 }, 00:08:52.822 "ns_data": { 00:08:52.822 "id": 1, 00:08:52.822 "can_share": true 00:08:52.822 } 00:08:52.822 } 00:08:52.822 ], 00:08:52.822 "mp_policy": "active_passive" 00:08:52.822 } 00:08:52.822 } 00:08:52.822 ] 00:08:52.822 00:52:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:52.822 00:52:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=209080 00:08:52.822 00:52:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:52.822 Running I/O for 10 seconds... 
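Before the per-second results that follow, it helps to collect the RPC sequence the lvs_grow_clean setup above just issued. The recap below uses only commands and arguments visible in this trace, with the long rpc.py and backing-file paths pulled into variables for readability:

# RPC sequence from the lvs_grow_clean setup, as traced above.
rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # 49 data clusters
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                # ~150M lvol (38 clusters)

# Export the lvol over NVMe-oF/RDMA on the first E810 port.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

# bdevperf runs on a second core as the initiator and drives 10 s of 4K randwrite.
bdevperf=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf
$bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &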
00:08:53.759 Latency(us) 00:08:53.759 [2024-11-18T23:53:00.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.760 Nvme0n1 : 1.00 29825.00 116.50 0.00 0.00 0.00 0.00 0.00 00:08:53.760 [2024-11-18T23:53:00.453Z] =================================================================================================================== 00:08:53.760 [2024-11-18T23:53:00.453Z] Total : 29825.00 116.50 0.00 0.00 0.00 0.00 0.00 00:08:53.760 00:08:54.697 00:53:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:08:54.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.697 Nvme0n1 : 2.00 30259.00 118.20 0.00 0.00 0.00 0.00 0.00 00:08:54.697 [2024-11-18T23:53:01.390Z] =================================================================================================================== 00:08:54.697 [2024-11-18T23:53:01.390Z] Total : 30259.00 118.20 0.00 0.00 0.00 0.00 0.00 00:08:54.697 00:08:54.957 true 00:08:54.957 00:53:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:08:54.957 00:53:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:55.216 00:53:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:55.216 00:53:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:55.216 00:53:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 209080 00:08:55.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.783 Nvme0n1 : 3.00 30435.33 118.89 0.00 0.00 0.00 0.00 0.00 00:08:55.783 [2024-11-18T23:53:02.476Z] =================================================================================================================== 00:08:55.783 [2024-11-18T23:53:02.476Z] Total : 30435.33 118.89 0.00 0.00 0.00 0.00 0.00 00:08:55.783 00:08:56.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.721 Nvme0n1 : 4.00 30611.00 119.57 0.00 0.00 0.00 0.00 0.00 00:08:56.721 [2024-11-18T23:53:03.414Z] =================================================================================================================== 00:08:56.721 [2024-11-18T23:53:03.414Z] Total : 30611.00 119.57 0.00 0.00 0.00 0.00 0.00 00:08:56.721 00:08:58.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.100 Nvme0n1 : 5.00 30722.00 120.01 0.00 0.00 0.00 0.00 0.00 00:08:58.100 [2024-11-18T23:53:04.793Z] =================================================================================================================== 00:08:58.100 [2024-11-18T23:53:04.793Z] Total : 30722.00 120.01 0.00 0.00 0.00 0.00 0.00 00:08:58.100 00:08:59.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.037 Nvme0n1 : 6.00 30791.33 120.28 0.00 0.00 0.00 0.00 0.00 00:08:59.037 [2024-11-18T23:53:05.730Z] 
=================================================================================================================== 00:08:59.037 [2024-11-18T23:53:05.730Z] Total : 30791.33 120.28 0.00 0.00 0.00 0.00 0.00 00:08:59.037 00:08:59.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.975 Nvme0n1 : 7.00 30854.14 120.52 0.00 0.00 0.00 0.00 0.00 00:08:59.975 [2024-11-18T23:53:06.668Z] =================================================================================================================== 00:08:59.975 [2024-11-18T23:53:06.668Z] Total : 30854.14 120.52 0.00 0.00 0.00 0.00 0.00 00:08:59.975 00:09:00.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.913 Nvme0n1 : 8.00 30907.12 120.73 0.00 0.00 0.00 0.00 0.00 00:09:00.913 [2024-11-18T23:53:07.606Z] =================================================================================================================== 00:09:00.913 [2024-11-18T23:53:07.606Z] Total : 30907.12 120.73 0.00 0.00 0.00 0.00 0.00 00:09:00.913 00:09:01.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.850 Nvme0n1 : 9.00 30946.67 120.89 0.00 0.00 0.00 0.00 0.00 00:09:01.850 [2024-11-18T23:53:08.543Z] =================================================================================================================== 00:09:01.850 [2024-11-18T23:53:08.543Z] Total : 30946.67 120.89 0.00 0.00 0.00 0.00 0.00 00:09:01.850 00:09:02.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.788 Nvme0n1 : 10.00 30972.00 120.98 0.00 0.00 0.00 0.00 0.00 00:09:02.788 [2024-11-18T23:53:09.481Z] =================================================================================================================== 00:09:02.788 [2024-11-18T23:53:09.481Z] Total : 30972.00 120.98 0.00 0.00 0.00 0.00 0.00 00:09:02.788 00:09:02.788 00:09:02.788 Latency(us) 00:09:02.788 [2024-11-18T23:53:09.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.788 Nvme0n1 : 10.00 30972.25 120.99 0.00 0.00 4129.50 2730.67 20597.03 00:09:02.788 [2024-11-18T23:53:09.481Z] =================================================================================================================== 00:09:02.788 [2024-11-18T23:53:09.481Z] Total : 30972.25 120.99 0.00 0.00 4129.50 2730.67 20597.03 00:09:02.788 { 00:09:02.788 "results": [ 00:09:02.788 { 00:09:02.788 "job": "Nvme0n1", 00:09:02.788 "core_mask": "0x2", 00:09:02.788 "workload": "randwrite", 00:09:02.788 "status": "finished", 00:09:02.788 "queue_depth": 128, 00:09:02.788 "io_size": 4096, 00:09:02.788 "runtime": 10.003633, 00:09:02.788 "iops": 30972.247782380662, 00:09:02.788 "mibps": 120.98534289992446, 00:09:02.788 "io_failed": 0, 00:09:02.788 "io_timeout": 0, 00:09:02.788 "avg_latency_us": 4129.498496093543, 00:09:02.788 "min_latency_us": 2730.6666666666665, 00:09:02.788 "max_latency_us": 20597.02857142857 00:09:02.788 } 00:09:02.788 ], 00:09:02.788 "core_count": 1 00:09:02.788 } 00:09:02.788 00:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 208856 00:09:02.788 00:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 208856 ']' 00:09:02.788 00:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 208856 00:09:02.788 00:53:09 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:02.788 00:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.788 00:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 208856 00:09:03.048 00:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:03.049 00:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:03.049 00:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 208856' 00:09:03.049 killing process with pid 208856 00:09:03.049 00:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 208856 00:09:03.049 Received shutdown signal, test time was about 10.000000 seconds 00:09:03.049 00:09:03.049 Latency(us) 00:09:03.049 [2024-11-18T23:53:09.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.049 [2024-11-18T23:53:09.742Z] =================================================================================================================== 00:09:03.049 [2024-11-18T23:53:09.742Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:03.049 00:53:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 208856 00:09:03.986 00:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:03.986 00:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:04.245 00:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:09:04.245 00:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:04.504 00:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:04.504 00:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:04.504 00:53:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:04.504 [2024-11-19 00:53:11.133750] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:04.504 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:09:04.504 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:04.504 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # 
valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:09:04.504 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:04.504 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.504 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:04.504 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.505 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:04.505 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.505 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:04.505 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:09:04.505 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:09:04.764 request: 00:09:04.764 { 00:09:04.764 "uuid": "86a428d7-37b5-4043-ae95-b70254553cc7", 00:09:04.764 "method": "bdev_lvol_get_lvstores", 00:09:04.764 "req_id": 1 00:09:04.764 } 00:09:04.764 Got JSON-RPC error response 00:09:04.764 response: 00:09:04.764 { 00:09:04.764 "code": -19, 00:09:04.764 "message": "No such device" 00:09:04.764 } 00:09:04.764 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:04.764 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:04.764 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:04.764 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:04.764 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:05.023 aio_bdev 00:09:05.023 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6480b3a8-27aa-4d0f-8c96-9dabff9099a5 00:09:05.024 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6480b3a8-27aa-4d0f-8c96-9dabff9099a5 00:09:05.024 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.024 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:05.024 00:53:11 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.024 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.024 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:05.024 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6480b3a8-27aa-4d0f-8c96-9dabff9099a5 -t 2000 00:09:05.283 [ 00:09:05.283 { 00:09:05.283 "name": "6480b3a8-27aa-4d0f-8c96-9dabff9099a5", 00:09:05.283 "aliases": [ 00:09:05.283 "lvs/lvol" 00:09:05.283 ], 00:09:05.283 "product_name": "Logical Volume", 00:09:05.283 "block_size": 4096, 00:09:05.283 "num_blocks": 38912, 00:09:05.283 "uuid": "6480b3a8-27aa-4d0f-8c96-9dabff9099a5", 00:09:05.283 "assigned_rate_limits": { 00:09:05.283 "rw_ios_per_sec": 0, 00:09:05.283 "rw_mbytes_per_sec": 0, 00:09:05.283 "r_mbytes_per_sec": 0, 00:09:05.283 "w_mbytes_per_sec": 0 00:09:05.283 }, 00:09:05.283 "claimed": false, 00:09:05.283 "zoned": false, 00:09:05.283 "supported_io_types": { 00:09:05.283 "read": true, 00:09:05.283 "write": true, 00:09:05.283 "unmap": true, 00:09:05.283 "flush": false, 00:09:05.283 "reset": true, 00:09:05.283 "nvme_admin": false, 00:09:05.283 "nvme_io": false, 00:09:05.283 "nvme_io_md": false, 00:09:05.283 "write_zeroes": true, 00:09:05.283 "zcopy": false, 00:09:05.283 "get_zone_info": false, 00:09:05.283 "zone_management": false, 00:09:05.283 "zone_append": false, 00:09:05.283 "compare": false, 00:09:05.283 "compare_and_write": false, 00:09:05.283 "abort": false, 00:09:05.283 "seek_hole": true, 00:09:05.283 "seek_data": true, 00:09:05.283 "copy": false, 00:09:05.283 "nvme_iov_md": false 00:09:05.283 }, 00:09:05.283 "driver_specific": { 00:09:05.283 "lvol": { 00:09:05.283 "lvol_store_uuid": "86a428d7-37b5-4043-ae95-b70254553cc7", 00:09:05.283 "base_bdev": "aio_bdev", 00:09:05.283 "thin_provision": false, 00:09:05.283 "num_allocated_clusters": 38, 00:09:05.283 "snapshot": false, 00:09:05.283 "clone": false, 00:09:05.283 "esnap_clone": false 00:09:05.283 } 00:09:05.283 } 00:09:05.283 } 00:09:05.283 ] 00:09:05.283 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:05.283 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:09:05.283 00:53:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:05.543 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:05.543 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:09:05.543 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:05.802 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 
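The two jq checks above confirm that the lvstore recovered from the re-created aio_bdev still has the expected geometry for the clean-grow case: 99 total data clusters with 38 of them allocated to the 150 MiB lvol, leaving 61 free (consistent with the 4 MiB cluster size over the grown 400 MiB backing file). A minimal sketch of the same verification, assuming a running target and the lvstore UUID in a hypothetical LVS_UUID variable:
    free_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
    data_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
    (( free_clusters == 61 ))    # 99 clusters minus the 38 held by the 150 MiB lvol
    (( data_clusters == 99 ))    # 400 MiB file / 4 MiB clusters, less lvstore metadata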
00:09:05.802 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6480b3a8-27aa-4d0f-8c96-9dabff9099a5 00:09:05.802 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86a428d7-37b5-4043-ae95-b70254553cc7 00:09:06.061 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:06.320 00:09:06.320 real 0m16.823s 00:09:06.320 user 0m16.941s 00:09:06.320 sys 0m1.059s 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:06.320 ************************************ 00:09:06.320 END TEST lvs_grow_clean 00:09:06.320 ************************************ 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.320 ************************************ 00:09:06.320 START TEST lvs_grow_dirty 00:09:06.320 ************************************ 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:06.320 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:06.321 00:53:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.580 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:06.580 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:06.840 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:06.840 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:06.840 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:07.099 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:07.099 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:07.099 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 lvol 150 00:09:07.099 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dc873638-37ea-475a-8c26-d20a1068dbc9 00:09:07.099 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:07.099 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:07.358 [2024-11-19 00:53:13.932817] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:07.358 [2024-11-19 00:53:13.932902] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:07.358 true 00:09:07.358 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:07.358 00:53:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:07.617 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:07.617 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:07.876 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc873638-37ea-475a-8c26-d20a1068dbc9 
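At this point the dirty-grow setup has resized the backing AIO file and rescanned the bdev, but the lvstore itself has not been grown yet (that happens later, via bdev_lvol_grow_lvstore, while the bdevperf workload is running), so total_data_clusters still reads 49; the lvol is then exported over NVMe-oF/RDMA. A rough outline of the setup sequence, using the RPCs shown in this trace (the file path and UUID variables below are placeholders):
    truncate -s 200M "$AIO_FILE"                                        # the test uses spdk/test/nvmf/target/aio_bdev
    scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
    scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    scripts/rpc.py bdev_lvol_create -u "$LVS_UUID" lvol 150             # 150 MiB lvol
    truncate -s 400M "$AIO_FILE"                                        # grow the file on disk only
    scripts/rpc.py bdev_aio_rescan aio_bdev                             # bdev now 102400 blocks; lvstore still 49 clusters
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL_UUID"    # export the lvol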
00:09:07.876 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:08.135 [2024-11-19 00:53:14.695142] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:08.135 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:08.394 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:08.394 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=211758 00:09:08.394 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.394 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 211758 /var/tmp/bdevperf.sock 00:09:08.394 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 211758 ']' 00:09:08.394 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:08.394 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.394 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:08.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:08.394 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.394 00:53:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:08.394 [2024-11-19 00:53:14.956126] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
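The 10-second randwrite load that produces the per-second tables below comes from the bdevperf example app started just above in RPC-wait mode (-z); once it is listening on /var/tmp/bdevperf.sock the test attaches the exported namespace over RDMA and triggers the run through the perform_tests helper. Condensed, with the same arguments that appear in this trace:
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests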
00:09:08.394 [2024-11-19 00:53:14.956216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211758 ] 00:09:08.394 [2024-11-19 00:53:15.081169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.651 [2024-11-19 00:53:15.188236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.219 00:53:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.219 00:53:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:09.219 00:53:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:09.478 Nvme0n1 00:09:09.478 00:53:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:09.737 [ 00:09:09.737 { 00:09:09.737 "name": "Nvme0n1", 00:09:09.737 "aliases": [ 00:09:09.737 "dc873638-37ea-475a-8c26-d20a1068dbc9" 00:09:09.737 ], 00:09:09.737 "product_name": "NVMe disk", 00:09:09.737 "block_size": 4096, 00:09:09.737 "num_blocks": 38912, 00:09:09.737 "uuid": "dc873638-37ea-475a-8c26-d20a1068dbc9", 00:09:09.737 "numa_id": 1, 00:09:09.737 "assigned_rate_limits": { 00:09:09.737 "rw_ios_per_sec": 0, 00:09:09.737 "rw_mbytes_per_sec": 0, 00:09:09.737 "r_mbytes_per_sec": 0, 00:09:09.737 "w_mbytes_per_sec": 0 00:09:09.737 }, 00:09:09.737 "claimed": false, 00:09:09.737 "zoned": false, 00:09:09.737 "supported_io_types": { 00:09:09.737 "read": true, 00:09:09.737 "write": true, 00:09:09.737 "unmap": true, 00:09:09.737 "flush": true, 00:09:09.737 "reset": true, 00:09:09.737 "nvme_admin": true, 00:09:09.737 "nvme_io": true, 00:09:09.737 "nvme_io_md": false, 00:09:09.737 "write_zeroes": true, 00:09:09.737 "zcopy": false, 00:09:09.737 "get_zone_info": false, 00:09:09.737 "zone_management": false, 00:09:09.737 "zone_append": false, 00:09:09.737 "compare": true, 00:09:09.737 "compare_and_write": true, 00:09:09.737 "abort": true, 00:09:09.737 "seek_hole": false, 00:09:09.737 "seek_data": false, 00:09:09.737 "copy": true, 00:09:09.737 "nvme_iov_md": false 00:09:09.737 }, 00:09:09.737 "memory_domains": [ 00:09:09.737 { 00:09:09.737 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:09.737 "dma_device_type": 0 00:09:09.737 } 00:09:09.737 ], 00:09:09.737 "driver_specific": { 00:09:09.737 "nvme": [ 00:09:09.737 { 00:09:09.737 "trid": { 00:09:09.737 "trtype": "RDMA", 00:09:09.737 "adrfam": "IPv4", 00:09:09.737 "traddr": "192.168.100.8", 00:09:09.737 "trsvcid": "4420", 00:09:09.737 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:09.737 }, 00:09:09.737 "ctrlr_data": { 00:09:09.737 "cntlid": 1, 00:09:09.737 "vendor_id": "0x8086", 00:09:09.737 "model_number": "SPDK bdev Controller", 00:09:09.737 "serial_number": "SPDK0", 00:09:09.737 "firmware_revision": "25.01", 00:09:09.737 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:09.737 "oacs": { 00:09:09.737 "security": 0, 00:09:09.737 "format": 0, 00:09:09.737 "firmware": 0, 00:09:09.737 "ns_manage": 0 00:09:09.737 }, 00:09:09.737 "multi_ctrlr": 
true, 00:09:09.737 "ana_reporting": false 00:09:09.737 }, 00:09:09.737 "vs": { 00:09:09.737 "nvme_version": "1.3" 00:09:09.737 }, 00:09:09.737 "ns_data": { 00:09:09.737 "id": 1, 00:09:09.737 "can_share": true 00:09:09.737 } 00:09:09.737 } 00:09:09.737 ], 00:09:09.737 "mp_policy": "active_passive" 00:09:09.737 } 00:09:09.737 } 00:09:09.737 ] 00:09:09.737 00:53:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=211989 00:09:09.737 00:53:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:09.737 00:53:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:09.737 Running I/O for 10 seconds... 00:09:10.673 Latency(us) 00:09:10.673 [2024-11-18T23:53:17.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.673 Nvme0n1 : 1.00 30305.00 118.38 0.00 0.00 0.00 0.00 0.00 00:09:10.673 [2024-11-18T23:53:17.366Z] =================================================================================================================== 00:09:10.673 [2024-11-18T23:53:17.366Z] Total : 30305.00 118.38 0.00 0.00 0.00 0.00 0.00 00:09:10.673 00:09:11.608 00:53:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:11.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.866 Nvme0n1 : 2.00 30656.00 119.75 0.00 0.00 0.00 0.00 0.00 00:09:11.866 [2024-11-18T23:53:18.559Z] =================================================================================================================== 00:09:11.866 [2024-11-18T23:53:18.559Z] Total : 30656.00 119.75 0.00 0.00 0.00 0.00 0.00 00:09:11.866 00:09:11.866 true 00:09:11.866 00:53:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:11.866 00:53:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:12.125 00:53:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:12.125 00:53:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:12.125 00:53:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 211989 00:09:12.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.690 Nvme0n1 : 3.00 30720.00 120.00 0.00 0.00 0.00 0.00 0.00 00:09:12.690 [2024-11-18T23:53:19.383Z] =================================================================================================================== 00:09:12.690 [2024-11-18T23:53:19.383Z] Total : 30720.00 120.00 0.00 0.00 0.00 0.00 0.00 00:09:12.690 00:09:14.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.066 Nvme0n1 : 4.00 30728.00 120.03 0.00 0.00 0.00 0.00 0.00 00:09:14.066 [2024-11-18T23:53:20.759Z] 
=================================================================================================================== 00:09:14.066 [2024-11-18T23:53:20.759Z] Total : 30728.00 120.03 0.00 0.00 0.00 0.00 0.00 00:09:14.066 00:09:15.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.001 Nvme0n1 : 5.00 30720.40 120.00 0.00 0.00 0.00 0.00 0.00 00:09:15.001 [2024-11-18T23:53:21.694Z] =================================================================================================================== 00:09:15.001 [2024-11-18T23:53:21.694Z] Total : 30720.40 120.00 0.00 0.00 0.00 0.00 0.00 00:09:15.001 00:09:15.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.937 Nvme0n1 : 6.00 30805.33 120.33 0.00 0.00 0.00 0.00 0.00 00:09:15.937 [2024-11-18T23:53:22.630Z] =================================================================================================================== 00:09:15.937 [2024-11-18T23:53:22.630Z] Total : 30805.33 120.33 0.00 0.00 0.00 0.00 0.00 00:09:15.937 00:09:16.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.872 Nvme0n1 : 7.00 30866.14 120.57 0.00 0.00 0.00 0.00 0.00 00:09:16.872 [2024-11-18T23:53:23.565Z] =================================================================================================================== 00:09:16.872 [2024-11-18T23:53:23.565Z] Total : 30866.14 120.57 0.00 0.00 0.00 0.00 0.00 00:09:16.872 00:09:17.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.808 Nvme0n1 : 8.00 30912.00 120.75 0.00 0.00 0.00 0.00 0.00 00:09:17.808 [2024-11-18T23:53:24.501Z] =================================================================================================================== 00:09:17.808 [2024-11-18T23:53:24.501Z] Total : 30912.00 120.75 0.00 0.00 0.00 0.00 0.00 00:09:17.808 00:09:18.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.745 Nvme0n1 : 9.00 30950.56 120.90 0.00 0.00 0.00 0.00 0.00 00:09:18.745 [2024-11-18T23:53:25.438Z] =================================================================================================================== 00:09:18.745 [2024-11-18T23:53:25.438Z] Total : 30950.56 120.90 0.00 0.00 0.00 0.00 0.00 00:09:18.745 00:09:19.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.682 Nvme0n1 : 10.00 30981.80 121.02 0.00 0.00 0.00 0.00 0.00 00:09:19.682 [2024-11-18T23:53:26.375Z] =================================================================================================================== 00:09:19.682 [2024-11-18T23:53:26.375Z] Total : 30981.80 121.02 0.00 0.00 0.00 0.00 0.00 00:09:19.682 00:09:19.682 00:09:19.682 Latency(us) 00:09:19.682 [2024-11-18T23:53:26.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.682 Nvme0n1 : 10.00 30980.08 121.02 0.00 0.00 4128.45 3011.54 17601.10 00:09:19.682 [2024-11-18T23:53:26.375Z] =================================================================================================================== 00:09:19.682 [2024-11-18T23:53:26.375Z] Total : 30980.08 121.02 0.00 0.00 4128.45 3011.54 17601.10 00:09:19.682 { 00:09:19.682 "results": [ 00:09:19.682 { 00:09:19.682 "job": "Nvme0n1", 00:09:19.682 "core_mask": "0x2", 00:09:19.682 "workload": "randwrite", 00:09:19.682 "status": "finished", 00:09:19.682 "queue_depth": 128, 00:09:19.682 "io_size": 4096, 
00:09:19.682 "runtime": 10.003591, 00:09:19.682 "iops": 30980.07505504773, 00:09:19.682 "mibps": 121.0159181837802, 00:09:19.682 "io_failed": 0, 00:09:19.682 "io_timeout": 0, 00:09:19.682 "avg_latency_us": 4128.453559412872, 00:09:19.682 "min_latency_us": 3011.535238095238, 00:09:19.682 "max_latency_us": 17601.097142857143 00:09:19.682 } 00:09:19.682 ], 00:09:19.682 "core_count": 1 00:09:19.682 } 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 211758 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 211758 ']' 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 211758 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211758 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211758' 00:09:19.941 killing process with pid 211758 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 211758 00:09:19.941 Received shutdown signal, test time was about 10.000000 seconds 00:09:19.941 00:09:19.941 Latency(us) 00:09:19.941 [2024-11-18T23:53:26.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.941 [2024-11-18T23:53:26.634Z] =================================================================================================================== 00:09:19.941 [2024-11-18T23:53:26.634Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:19.941 00:53:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 211758 00:09:20.880 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:20.880 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:21.138 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:21.138 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:21.397 
00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 208233 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 208233 00:09:21.397 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 208233 Killed "${NVMF_APP[@]}" "$@" 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=213823 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 213823 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 213823 ']' 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.397 00:53:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:21.397 [2024-11-19 00:53:28.047520] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:21.397 [2024-11-19 00:53:28.047626] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.656 [2024-11-19 00:53:28.179584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.656 [2024-11-19 00:53:28.283954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.657 [2024-11-19 00:53:28.284003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.657 [2024-11-19 00:53:28.284013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.657 [2024-11-19 00:53:28.284024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:21.657 [2024-11-19 00:53:28.284031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.657 [2024-11-19 00:53:28.285577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.225 00:53:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.225 00:53:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:22.225 00:53:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:22.226 00:53:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:22.226 00:53:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.226 00:53:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.226 00:53:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:22.484 [2024-11-19 00:53:29.048737] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:22.484 [2024-11-19 00:53:29.048901] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:22.484 [2024-11-19 00:53:29.048936] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:22.484 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:22.484 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dc873638-37ea-475a-8c26-d20a1068dbc9 00:09:22.485 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=dc873638-37ea-475a-8c26-d20a1068dbc9 00:09:22.485 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.485 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:22.485 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.485 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.485 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:22.742 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc873638-37ea-475a-8c26-d20a1068dbc9 -t 2000 00:09:23.001 [ 00:09:23.001 { 00:09:23.001 "name": "dc873638-37ea-475a-8c26-d20a1068dbc9", 00:09:23.001 "aliases": [ 00:09:23.001 "lvs/lvol" 00:09:23.001 ], 00:09:23.001 "product_name": "Logical Volume", 00:09:23.001 "block_size": 4096, 00:09:23.001 "num_blocks": 38912, 00:09:23.001 "uuid": "dc873638-37ea-475a-8c26-d20a1068dbc9", 00:09:23.001 "assigned_rate_limits": { 00:09:23.001 "rw_ios_per_sec": 0, 00:09:23.001 
"rw_mbytes_per_sec": 0, 00:09:23.001 "r_mbytes_per_sec": 0, 00:09:23.001 "w_mbytes_per_sec": 0 00:09:23.001 }, 00:09:23.001 "claimed": false, 00:09:23.001 "zoned": false, 00:09:23.001 "supported_io_types": { 00:09:23.001 "read": true, 00:09:23.001 "write": true, 00:09:23.001 "unmap": true, 00:09:23.001 "flush": false, 00:09:23.001 "reset": true, 00:09:23.001 "nvme_admin": false, 00:09:23.001 "nvme_io": false, 00:09:23.001 "nvme_io_md": false, 00:09:23.001 "write_zeroes": true, 00:09:23.001 "zcopy": false, 00:09:23.001 "get_zone_info": false, 00:09:23.001 "zone_management": false, 00:09:23.001 "zone_append": false, 00:09:23.001 "compare": false, 00:09:23.001 "compare_and_write": false, 00:09:23.001 "abort": false, 00:09:23.001 "seek_hole": true, 00:09:23.001 "seek_data": true, 00:09:23.001 "copy": false, 00:09:23.001 "nvme_iov_md": false 00:09:23.001 }, 00:09:23.001 "driver_specific": { 00:09:23.001 "lvol": { 00:09:23.001 "lvol_store_uuid": "27dc91a2-a8cd-4584-906c-b9cbbd5f7504", 00:09:23.001 "base_bdev": "aio_bdev", 00:09:23.001 "thin_provision": false, 00:09:23.001 "num_allocated_clusters": 38, 00:09:23.001 "snapshot": false, 00:09:23.001 "clone": false, 00:09:23.001 "esnap_clone": false 00:09:23.001 } 00:09:23.001 } 00:09:23.001 } 00:09:23.001 ] 00:09:23.001 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:23.001 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:23.001 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:23.001 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:23.001 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:23.001 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:23.261 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:23.261 00:53:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:23.521 [2024-11-19 00:53:29.989151] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:09:23.521 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:23.521 request: 00:09:23.521 { 00:09:23.521 "uuid": "27dc91a2-a8cd-4584-906c-b9cbbd5f7504", 00:09:23.521 "method": "bdev_lvol_get_lvstores", 00:09:23.521 "req_id": 1 00:09:23.521 } 00:09:23.521 Got JSON-RPC error response 00:09:23.521 response: 00:09:23.521 { 00:09:23.521 "code": -19, 00:09:23.521 "message": "No such device" 00:09:23.521 } 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:23.780 aio_bdev 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dc873638-37ea-475a-8c26-d20a1068dbc9 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=dc873638-37ea-475a-8c26-d20a1068dbc9 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.780 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.780 00:53:30 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:24.039 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc873638-37ea-475a-8c26-d20a1068dbc9 -t 2000 00:09:24.297 [ 00:09:24.297 { 00:09:24.297 "name": "dc873638-37ea-475a-8c26-d20a1068dbc9", 00:09:24.297 "aliases": [ 00:09:24.297 "lvs/lvol" 00:09:24.297 ], 00:09:24.297 "product_name": "Logical Volume", 00:09:24.297 "block_size": 4096, 00:09:24.297 "num_blocks": 38912, 00:09:24.297 "uuid": "dc873638-37ea-475a-8c26-d20a1068dbc9", 00:09:24.297 "assigned_rate_limits": { 00:09:24.297 "rw_ios_per_sec": 0, 00:09:24.297 "rw_mbytes_per_sec": 0, 00:09:24.297 "r_mbytes_per_sec": 0, 00:09:24.297 "w_mbytes_per_sec": 0 00:09:24.297 }, 00:09:24.297 "claimed": false, 00:09:24.297 "zoned": false, 00:09:24.297 "supported_io_types": { 00:09:24.297 "read": true, 00:09:24.297 "write": true, 00:09:24.297 "unmap": true, 00:09:24.297 "flush": false, 00:09:24.297 "reset": true, 00:09:24.297 "nvme_admin": false, 00:09:24.297 "nvme_io": false, 00:09:24.297 "nvme_io_md": false, 00:09:24.297 "write_zeroes": true, 00:09:24.297 "zcopy": false, 00:09:24.297 "get_zone_info": false, 00:09:24.297 "zone_management": false, 00:09:24.297 "zone_append": false, 00:09:24.297 "compare": false, 00:09:24.297 "compare_and_write": false, 00:09:24.297 "abort": false, 00:09:24.297 "seek_hole": true, 00:09:24.297 "seek_data": true, 00:09:24.297 "copy": false, 00:09:24.297 "nvme_iov_md": false 00:09:24.297 }, 00:09:24.297 "driver_specific": { 00:09:24.297 "lvol": { 00:09:24.297 "lvol_store_uuid": "27dc91a2-a8cd-4584-906c-b9cbbd5f7504", 00:09:24.297 "base_bdev": "aio_bdev", 00:09:24.297 "thin_provision": false, 00:09:24.297 "num_allocated_clusters": 38, 00:09:24.297 "snapshot": false, 00:09:24.297 "clone": false, 00:09:24.297 "esnap_clone": false 00:09:24.297 } 00:09:24.297 } 00:09:24.297 } 00:09:24.297 ] 00:09:24.297 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:24.297 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:24.297 00:53:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:24.554 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:24.554 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:24.554 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:24.554 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:24.554 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc873638-37ea-475a-8c26-d20a1068dbc9 00:09:24.812 00:53:31 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27dc91a2-a8cd-4584-906c-b9cbbd5f7504 00:09:25.071 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.330 00:09:25.330 real 0m18.887s 00:09:25.330 user 0m49.327s 00:09:25.330 sys 0m2.999s 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:25.330 ************************************ 00:09:25.330 END TEST lvs_grow_dirty 00:09:25.330 ************************************ 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:25.330 nvmf_trace.0 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.330 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:25.331 rmmod nvme_rdma 00:09:25.331 rmmod nvme_fabrics 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 
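The teardown above archives the target's tracepoint shared-memory file before unloading the RDMA transport modules: the nvmf_tgt was started with -e 0xFFFF, so /dev/shm/nvmf_trace.0 exists and is packed into the output directory for offline analysis, as the app itself suggested at startup. Reproducing that capture by hand would look roughly like:
    tar -C /dev/shm/ -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0      # offline copy, as done by process_shm
    # or, while the target is still running: spdk_trace -s nvmf -i 0
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics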
00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 213823 ']' 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 213823 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 213823 ']' 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 213823 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 213823 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 213823' 00:09:25.331 killing process with pid 213823 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 213823 00:09:25.331 00:53:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 213823 00:09:26.710 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.710 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:26.710 00:09:26.710 real 0m44.298s 00:09:26.710 user 1m13.505s 00:09:26.710 sys 0m8.987s 00:09:26.710 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.710 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.710 ************************************ 00:09:26.710 END TEST nvmf_lvs_grow 00:09:26.710 ************************************ 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.711 ************************************ 00:09:26.711 START TEST nvmf_bdev_io_wait 00:09:26.711 ************************************ 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:26.711 * Looking for test storage... 
00:09:26.711 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.711 --rc genhtml_branch_coverage=1 00:09:26.711 --rc genhtml_function_coverage=1 00:09:26.711 --rc genhtml_legend=1 00:09:26.711 --rc geninfo_all_blocks=1 00:09:26.711 --rc geninfo_unexecuted_blocks=1 00:09:26.711 00:09:26.711 ' 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.711 --rc genhtml_branch_coverage=1 00:09:26.711 --rc genhtml_function_coverage=1 00:09:26.711 --rc genhtml_legend=1 00:09:26.711 --rc geninfo_all_blocks=1 00:09:26.711 --rc geninfo_unexecuted_blocks=1 00:09:26.711 00:09:26.711 ' 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.711 --rc genhtml_branch_coverage=1 00:09:26.711 --rc genhtml_function_coverage=1 00:09:26.711 --rc genhtml_legend=1 00:09:26.711 --rc geninfo_all_blocks=1 00:09:26.711 --rc geninfo_unexecuted_blocks=1 00:09:26.711 00:09:26.711 ' 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.711 --rc genhtml_branch_coverage=1 00:09:26.711 --rc genhtml_function_coverage=1 00:09:26.711 --rc genhtml_legend=1 00:09:26.711 --rc geninfo_all_blocks=1 00:09:26.711 --rc geninfo_unexecuted_blocks=1 00:09:26.711 00:09:26.711 ' 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.711 00:53:33 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:26.711 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.712 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.712 00:53:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.287 00:53:38 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:33.287 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:09:33.287 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.287 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@405 -- # modinfo irdma 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:33.288 Found net devices under 0000:af:00.0: cvl_0_0 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:33.288 Found net devices under 0000:af:00.1: cvl_0_1 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.288 00:53:38 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo cvl_0_0 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:33.288 
00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo cvl_0_1 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:09:33.288 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:09:33.288 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:09:33.288 altname enp175s0f0np0 00:09:33.288 altname ens801f0np0 00:09:33.288 inet 192.168.100.8/24 scope global cvl_0_0 00:09:33.288 valid_lft forever preferred_lft forever 00:09:33.288 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:09:33.288 valid_lft forever preferred_lft forever 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:09:33.288 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:09:33.288 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:09:33.288 altname enp175s0f1np1 00:09:33.288 altname ens801f1np1 00:09:33.288 inet 192.168.100.9/24 scope global cvl_0_1 00:09:33.288 valid_lft forever preferred_lft forever 00:09:33.288 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:09:33.288 valid_lft forever preferred_lft forever 
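The trace above is nvmftestinit running against real e810 hardware: both ice-bound ports (0000:af:00.0 and 0000:af:00.1) are detected, irdma is loaded with roce_ena=1 alongside the ib_*/rdma_* core modules, and the first IPv4 address of each renamed port (cvl_0_0 -> 192.168.100.8, cvl_0_1 -> 192.168.100.9) is read back with an ip/awk/cut pipeline. A minimal stand-alone sketch of that lookup, assuming the interfaces are already addressed; the interface names are just the ones shown in the trace:

    # Sketch of the get_ip_address pipeline traced above.
    get_ip_address() {
        local interface=$1
        # "ip -o -4 addr show" prints one line per IPv4 address; field 4 is the CIDR
        # form (e.g. 192.168.100.8/24), so strip the prefix length with cut.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    for ifc in cvl_0_0 cvl_0_1; do
        echo "$ifc -> $(get_ip_address "$ifc")"
    done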
00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo cvl_0_0 00:09:33.288 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo cvl_0_1 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:33.289 192.168.100.9' 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:33.289 192.168.100.9' 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:33.289 192.168.100.9' 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=217851 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 217851 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 217851 ']' 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 
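From those two addresses the script assembles RDMA_IP_LIST, takes the first entry as NVMF_FIRST_TARGET_IP and the second as NVMF_SECOND_TARGET_IP, loads nvme-rdma for the host side, and then nvmfappstart launches nvmf_tgt with --wait-for-rpc (pid 217851) so that all configuration happens over explicit RPCs. A hedged sketch of that sequence; only the nvmf_tgt path and flags are taken from the trace, the rest is illustrative:

    # Pick the first and second RDMA-capable addresses, mirroring the head/tail pipeline above.
    RDMA_IP_LIST="$(printf '%s\n' 192.168.100.8 192.168.100.9)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    # Host-side RDMA transport driver, needed later for "nvme connect -t rdma".
    modprobe nvme-rdma

    # Start the target idle (--wait-for-rpc) and remember its pid for cleanup.
    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!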
00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.289 00:53:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.289 [2024-11-19 00:53:39.303207] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:33.289 [2024-11-19 00:53:39.303305] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.289 [2024-11-19 00:53:39.429410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.289 [2024-11-19 00:53:39.538590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.289 [2024-11-19 00:53:39.538641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.289 [2024-11-19 00:53:39.538651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.289 [2024-11-19 00:53:39.538677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.289 [2024-11-19 00:53:39.538686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.289 [2024-11-19 00:53:39.541354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.289 [2024-11-19 00:53:39.541423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.289 [2024-11-19 00:53:39.541490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.289 [2024-11-19 00:53:39.541514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:33.548 00:53:40 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.548 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.807 [2024-11-19 00:53:40.406227] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000029440/0x617000007c40) succeed. 00:09:33.807 [2024-11-19 00:53:40.415595] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:09:33.807 [2024-11-19 00:53:40.415622] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.807 Malloc0 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.807 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.066 [2024-11-19 00:53:40.529165] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=218088 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=218090 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.066 { 00:09:34.066 "params": { 00:09:34.066 "name": "Nvme$subsystem", 00:09:34.066 "trtype": "$TEST_TRANSPORT", 00:09:34.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.066 "adrfam": "ipv4", 00:09:34.066 "trsvcid": "$NVMF_PORT", 00:09:34.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.066 "hdgst": ${hdgst:-false}, 00:09:34.066 "ddgst": ${ddgst:-false} 00:09:34.066 }, 00:09:34.066 "method": "bdev_nvme_attach_controller" 00:09:34.066 } 00:09:34.066 EOF 00:09:34.066 )") 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=218092 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.066 { 00:09:34.066 "params": { 00:09:34.066 "name": "Nvme$subsystem", 00:09:34.066 "trtype": "$TEST_TRANSPORT", 00:09:34.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.066 "adrfam": "ipv4", 00:09:34.066 "trsvcid": "$NVMF_PORT", 00:09:34.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.066 "hdgst": ${hdgst:-false}, 00:09:34.066 "ddgst": ${ddgst:-false} 00:09:34.066 }, 00:09:34.066 "method": "bdev_nvme_attach_controller" 00:09:34.066 } 00:09:34.066 EOF 00:09:34.066 )") 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:34.066 00:53:40 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=218095 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:34.066 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.067 { 00:09:34.067 "params": { 00:09:34.067 "name": "Nvme$subsystem", 00:09:34.067 "trtype": "$TEST_TRANSPORT", 00:09:34.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.067 "adrfam": "ipv4", 00:09:34.067 "trsvcid": "$NVMF_PORT", 00:09:34.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.067 "hdgst": ${hdgst:-false}, 00:09:34.067 "ddgst": ${ddgst:-false} 00:09:34.067 }, 00:09:34.067 "method": "bdev_nvme_attach_controller" 00:09:34.067 } 00:09:34.067 EOF 00:09:34.067 )") 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.067 { 00:09:34.067 "params": { 00:09:34.067 "name": "Nvme$subsystem", 00:09:34.067 "trtype": "$TEST_TRANSPORT", 00:09:34.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.067 "adrfam": "ipv4", 00:09:34.067 "trsvcid": "$NVMF_PORT", 00:09:34.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.067 "hdgst": ${hdgst:-false}, 00:09:34.067 "ddgst": ${ddgst:-false} 00:09:34.067 }, 00:09:34.067 "method": "bdev_nvme_attach_controller" 00:09:34.067 } 00:09:34.067 EOF 00:09:34.067 )") 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 218088 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.067 "params": { 00:09:34.067 "name": "Nvme1", 00:09:34.067 "trtype": "rdma", 00:09:34.067 "traddr": "192.168.100.8", 00:09:34.067 "adrfam": "ipv4", 00:09:34.067 "trsvcid": "4420", 00:09:34.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.067 "hdgst": false, 00:09:34.067 "ddgst": false 00:09:34.067 }, 00:09:34.067 "method": "bdev_nvme_attach_controller" 00:09:34.067 }' 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.067 "params": { 00:09:34.067 "name": "Nvme1", 00:09:34.067 "trtype": "rdma", 00:09:34.067 "traddr": "192.168.100.8", 00:09:34.067 "adrfam": "ipv4", 00:09:34.067 "trsvcid": "4420", 00:09:34.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.067 "hdgst": false, 00:09:34.067 "ddgst": false 00:09:34.067 }, 00:09:34.067 "method": "bdev_nvme_attach_controller" 00:09:34.067 }' 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.067 "params": { 00:09:34.067 "name": "Nvme1", 00:09:34.067 "trtype": "rdma", 00:09:34.067 "traddr": "192.168.100.8", 00:09:34.067 "adrfam": "ipv4", 00:09:34.067 "trsvcid": "4420", 00:09:34.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.067 "hdgst": false, 00:09:34.067 "ddgst": false 00:09:34.067 }, 00:09:34.067 "method": "bdev_nvme_attach_controller" 00:09:34.067 }' 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.067 00:53:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.067 "params": { 00:09:34.067 "name": "Nvme1", 00:09:34.067 "trtype": "rdma", 00:09:34.067 "traddr": "192.168.100.8", 00:09:34.067 "adrfam": "ipv4", 00:09:34.067 "trsvcid": "4420", 00:09:34.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.067 "hdgst": false, 00:09:34.067 "ddgst": false 00:09:34.067 }, 00:09:34.067 "method": "bdev_nvme_attach_controller" 00:09:34.067 }' 00:09:34.067 [2024-11-19 00:53:40.606738] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:34.067 [2024-11-19 00:53:40.606745] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:34.067 [2024-11-19 00:53:40.606752] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
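Before the I/O jobs are started, the trace above configures that target entirely over RPC: bdev_set_options -p 5 -c 1 (a deliberately small bdev_io pool, presumably to exercise the IO-wait path this test targets), framework_start_init, an rdma transport with 1024 shared buffers and an 8192-byte IO unit, a 64 MiB Malloc0 bdev with 512-byte blocks added as a namespace of nqn.2016-06.io.spdk:cnode1, and an RDMA listener on 192.168.100.8:4420; each bdevperf instance is then handed the JSON printed above via --json /dev/fd/63. A rough equivalent of that configuration using SPDK's scripts/rpc.py (the test itself goes through the rpc_cmd wrapper; the rpc.py path is an assumption about the checkout layout):

    # Same target configuration as the rpc_cmd calls traced above, against the default /var/tmp/spdk.sock.
    RPC=./scripts/rpc.py   # adjust to the SPDK checkout in use

    $RPC bdev_set_options -p 5 -c 1          # tiny bdev_io pool/cache, so submissions queue on IO wait
    $RPC framework_start_init                # finish the startup deferred by --wait-for-rpc
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420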
00:09:34.067 [2024-11-19 00:53:40.606828] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-19 00:53:40.606829] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 [2024-11-19 00:53:40.606829] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:34.067 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:34.067 --proc-type=auto ] 00:09:34.067 [2024-11-19 00:53:40.612718] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:34.067 [2024-11-19 00:53:40.612802] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:34.326 [2024-11-19 00:53:40.846808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.326 [2024-11-19 00:53:40.947197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.326 [2024-11-19 00:53:40.951820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:34.326 [2024-11-19 00:53:41.001799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.584 [2024-11-19 00:53:41.056765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:34.584 [2024-11-19 00:53:41.103206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.584 [2024-11-19 00:53:41.112818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:34.584 [2024-11-19 00:53:41.204905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:34.842 Running I/O for 1 seconds... 00:09:34.842 Running I/O for 1 seconds... 00:09:34.842 Running I/O for 1 seconds... 00:09:35.101 Running I/O for 1 seconds... 
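The four "Running I/O for 1 seconds..." lines are the four bdevperf instances started above: write on core mask 0x10, read on 0x20, flush on 0x40 and unmap on 0x80, each attaching to the same Nvme1 controller through that JSON and backgrounded so the script can later wait on WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID (218088/218090/218092/218095). A compressed sketch of the launch pattern, with the bdevperf path taken from the trace and the on-disk config file name purely illustrative (the test pipes the JSON in over /dev/fd/63 instead):

    BDEVPERF=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf
    CONF=/tmp/nvme1.json   # would hold the bdev_nvme_attach_controller config shown above

    pids=()
    i=1
    for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
        set -- $spec                       # $1 = core mask, $2 = workload
        "$BDEVPERF" -m "$1" -i "$i" --json "$CONF" -q 128 -o 4096 -w "$2" -t 1 -s 256 &
        pids+=($!)
        i=$((i + 1))
    done
    wait "${pids[@]}"                      # bdev_io_wait.sh waits on each pid individually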
00:09:36.036 16342.00 IOPS, 63.84 MiB/s 00:09:36.036 Latency(us) 00:09:36.036 [2024-11-18T23:53:42.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.036 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:36.036 Nvme1n1 : 1.01 16374.45 63.96 0.00 0.00 7791.07 5118.05 23468.13 00:09:36.036 [2024-11-18T23:53:42.729Z] =================================================================================================================== 00:09:36.036 [2024-11-18T23:53:42.729Z] Total : 16374.45 63.96 0.00 0.00 7791.07 5118.05 23468.13 00:09:36.036 14355.00 IOPS, 56.07 MiB/s 00:09:36.036 Latency(us) 00:09:36.036 [2024-11-18T23:53:42.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.036 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:36.036 Nvme1n1 : 1.01 14403.03 56.26 0.00 0.00 8856.30 5024.43 26963.38 00:09:36.036 [2024-11-18T23:53:42.729Z] =================================================================================================================== 00:09:36.036 [2024-11-18T23:53:42.729Z] Total : 14403.03 56.26 0.00 0.00 8856.30 5024.43 26963.38 00:09:36.036 227104.00 IOPS, 887.12 MiB/s 00:09:36.036 Latency(us) 00:09:36.036 [2024-11-18T23:53:42.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.036 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:36.036 Nvme1n1 : 1.00 226739.69 885.70 0.00 0.00 561.71 253.56 2512.21 00:09:36.036 [2024-11-18T23:53:42.729Z] =================================================================================================================== 00:09:36.036 [2024-11-18T23:53:42.729Z] Total : 226739.69 885.70 0.00 0.00 561.71 253.56 2512.21 00:09:36.036 16786.00 IOPS, 65.57 MiB/s 00:09:36.037 Latency(us) 00:09:36.037 [2024-11-18T23:53:42.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.037 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:36.037 Nvme1n1 : 1.01 16857.67 65.85 0.00 0.00 7571.55 3588.88 23967.45 00:09:36.037 [2024-11-18T23:53:42.730Z] =================================================================================================================== 00:09:36.037 [2024-11-18T23:53:42.730Z] Total : 16857.67 65.85 0.00 0.00 7571.55 3588.88 23967.45 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 218090 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 218092 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 218095 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.604 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:36.604 rmmod nvme_rdma 00:09:36.604 rmmod nvme_fabrics 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 217851 ']' 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 217851 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 217851 ']' 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 217851 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217851 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217851' 00:09:36.863 killing process with pid 217851 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 217851 00:09:36.863 00:53:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 217851 00:09:37.799 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.799 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:37.799 00:09:37.799 real 0m11.337s 00:09:37.799 user 0m29.307s 00:09:37.799 sys 0m6.177s 00:09:37.799 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.799 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.799 ************************************ 00:09:37.799 END TEST nvmf_bdev_io_wait 00:09:37.799 ************************************ 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:38.060 00:53:44 
nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.060 ************************************ 00:09:38.060 START TEST nvmf_queue_depth 00:09:38.060 ************************************ 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:38.060 * Looking for test storage... 00:09:38.060 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.060 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.060 --rc genhtml_branch_coverage=1 00:09:38.060 --rc genhtml_function_coverage=1 00:09:38.060 --rc genhtml_legend=1 00:09:38.060 --rc geninfo_all_blocks=1 00:09:38.060 --rc geninfo_unexecuted_blocks=1 00:09:38.060 00:09:38.061 ' 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.061 --rc genhtml_branch_coverage=1 00:09:38.061 --rc genhtml_function_coverage=1 00:09:38.061 --rc genhtml_legend=1 00:09:38.061 --rc geninfo_all_blocks=1 00:09:38.061 --rc geninfo_unexecuted_blocks=1 00:09:38.061 00:09:38.061 ' 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.061 --rc genhtml_branch_coverage=1 00:09:38.061 --rc genhtml_function_coverage=1 00:09:38.061 --rc genhtml_legend=1 00:09:38.061 --rc geninfo_all_blocks=1 00:09:38.061 --rc geninfo_unexecuted_blocks=1 00:09:38.061 00:09:38.061 ' 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.061 --rc genhtml_branch_coverage=1 00:09:38.061 --rc genhtml_function_coverage=1 00:09:38.061 --rc genhtml_legend=1 00:09:38.061 --rc geninfo_all_blocks=1 00:09:38.061 --rc geninfo_unexecuted_blocks=1 00:09:38.061 00:09:38.061 ' 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.061 00:53:44 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.061 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:38.321 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:38.321 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.321 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.321 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.321 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.321 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:09:38.321 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.321 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.321 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.321 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.322 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:38.322 00:53:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:44.901 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:44.901 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@405 -- # modinfo irdma 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:44.901 Found net devices under 0000:af:00.0: cvl_0_0 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.901 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:44.902 Found net devices under 0000:af:00.1: cvl_0_1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.902 00:53:50 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo cvl_0_0 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
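[editor's note] The trace above prepares the RDMA test environment: irdma is reloaded with RoCE enabled, the generic IB/RDMA kernel modules are loaded, and allocate_nic_ips assigns the 192.168.100.0/24 test addresses to the two E810 ports whose ip addr output is printed just below. A hand-run equivalent, as a hedged sketch (allocate_nic_ips in nvmf/common.sh is the authoritative path; the explicit ip commands are an assumption that merely reproduces the addresses shown next):
    modprobe irdma roce_ena=1
    modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
    ip addr add 192.168.100.8/24 dev cvl_0_0
    ip addr add 192.168.100.9/24 dev cvl_0_1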
00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo cvl_0_1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:09:44.902 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:09:44.902 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:09:44.902 altname enp175s0f0np0 00:09:44.902 altname ens801f0np0 00:09:44.902 inet 192.168.100.8/24 scope global cvl_0_0 00:09:44.902 valid_lft forever preferred_lft forever 00:09:44.902 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:09:44.902 valid_lft forever preferred_lft forever 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:09:44.902 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:09:44.902 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:09:44.902 altname enp175s0f1np1 00:09:44.902 altname ens801f1np1 00:09:44.902 inet 192.168.100.9/24 scope global cvl_0_1 00:09:44.902 valid_lft forever preferred_lft forever 00:09:44.902 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:09:44.902 valid_lft forever preferred_lft forever 00:09:44.902 
00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo cvl_0_0 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo cvl_0_1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:44.902 192.168.100.9' 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:44.902 192.168.100.9' 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # head -n 1 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:44.902 192.168.100.9' 00:09:44.902 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=221966 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 221966 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 221966 ']' 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.903 00:53:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.903 [2024-11-19 00:53:50.678827] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:44.903 [2024-11-19 00:53:50.678921] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.903 [2024-11-19 00:53:50.810246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.903 [2024-11-19 00:53:50.916430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.903 [2024-11-19 00:53:50.916479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.903 [2024-11-19 00:53:50.916489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.903 [2024-11-19 00:53:50.916516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.903 [2024-11-19 00:53:50.916525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.903 [2024-11-19 00:53:50.918020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.903 [2024-11-19 00:53:51.544447] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000289c0/0x617000007c40) succeed. 00:09:44.903 [2024-11-19 00:53:51.553703] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000028b40/0x617000007fc0) succeed. 00:09:44.903 [2024-11-19 00:53:51.553736] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.903 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.162 Malloc0 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.162 [2024-11-19 00:53:51.676404] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=222088 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 222088 /var/tmp/bdevperf.sock 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 222088 ']' 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:45.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.162 00:53:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.162 [2024-11-19 00:53:51.754291] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:45.162 [2024-11-19 00:53:51.754391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222088 ] 00:09:45.421 [2024-11-19 00:53:51.881106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.421 [2024-11-19 00:53:51.992902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.990 00:53:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.990 00:53:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:45.990 00:53:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:45.990 00:53:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.990 00:53:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.990 NVMe0n1 00:09:45.990 00:53:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.990 00:53:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:46.249 Running I/O for 10 seconds... 
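[editor's note] At this point the queue-depth target is fully configured and the 10-second verify run begins. The rpc_cmd calls traced above condense to the following hedged sketch; rpc_cmd is the test suite's wrapper, so spelling the calls out via scripts/rpc.py is an assumption about the wrapper, not an additional step:
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # bdevperf was started with: -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10, then:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests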
00:09:48.123 14336.00 IOPS, 56.00 MiB/s [2024-11-18T23:53:56.194Z] 14678.00 IOPS, 57.34 MiB/s [2024-11-18T23:53:56.762Z] 14720.00 IOPS, 57.50 MiB/s [2024-11-18T23:53:58.142Z] 14848.00 IOPS, 58.00 MiB/s [2024-11-18T23:53:59.080Z] 14901.60 IOPS, 58.21 MiB/s [2024-11-18T23:54:00.019Z] 14873.50 IOPS, 58.10 MiB/s [2024-11-18T23:54:00.956Z] 14921.14 IOPS, 58.29 MiB/s [2024-11-18T23:54:01.893Z] 14865.50 IOPS, 58.07 MiB/s [2024-11-18T23:54:02.832Z] 14904.89 IOPS, 58.22 MiB/s [2024-11-18T23:54:02.832Z] 14918.20 IOPS, 58.27 MiB/s 00:09:56.139 Latency(us) 00:09:56.139 [2024-11-18T23:54:02.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.139 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:56.139 Verification LBA range: start 0x0 length 0x4000 00:09:56.139 NVMe0n1 : 10.05 14933.08 58.33 0.00 0.00 68340.29 13044.78 44689.31 00:09:56.139 [2024-11-18T23:54:02.832Z] =================================================================================================================== 00:09:56.139 [2024-11-18T23:54:02.832Z] Total : 14933.08 58.33 0.00 0.00 68340.29 13044.78 44689.31 00:09:56.399 { 00:09:56.399 "results": [ 00:09:56.399 { 00:09:56.399 "job": "NVMe0n1", 00:09:56.399 "core_mask": "0x1", 00:09:56.399 "workload": "verify", 00:09:56.399 "status": "finished", 00:09:56.399 "verify_range": { 00:09:56.399 "start": 0, 00:09:56.399 "length": 16384 00:09:56.399 }, 00:09:56.399 "queue_depth": 1024, 00:09:56.399 "io_size": 4096, 00:09:56.399 "runtime": 10.045619, 00:09:56.399 "iops": 14933.07679695995, 00:09:56.399 "mibps": 58.3323312381248, 00:09:56.399 "io_failed": 0, 00:09:56.399 "io_timeout": 0, 00:09:56.399 "avg_latency_us": 68340.2891205402, 00:09:56.399 "min_latency_us": 13044.784761904762, 00:09:56.399 "max_latency_us": 44689.310476190476 00:09:56.399 } 00:09:56.399 ], 00:09:56.399 "core_count": 1 00:09:56.399 } 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 222088 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 222088 ']' 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 222088 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 222088 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 222088' 00:09:56.399 killing process with pid 222088 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 222088 00:09:56.399 Received shutdown signal, test time was about 10.000000 seconds 00:09:56.399 00:09:56.399 Latency(us) 00:09:56.399 [2024-11-18T23:54:03.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.399 [2024-11-18T23:54:03.092Z] 
=================================================================================================================== 00:09:56.399 [2024-11-18T23:54:03.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:56.399 00:54:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 222088 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:57.339 rmmod nvme_rdma 00:09:57.339 rmmod nvme_fabrics 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 221966 ']' 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 221966 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 221966 ']' 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 221966 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221966 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221966' 00:09:57.339 killing process with pid 221966 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 221966 00:09:57.339 00:54:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 221966 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:58.721 00:09:58.721 real 0m20.635s 00:09:58.721 user 0m28.590s 00:09:58.721 sys 0m5.203s 00:09:58.721 00:54:05 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.721 ************************************ 00:09:58.721 END TEST nvmf_queue_depth 00:09:58.721 ************************************ 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.721 ************************************ 00:09:58.721 START TEST nvmf_target_multipath 00:09:58.721 ************************************ 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:58.721 * Looking for test storage... 00:09:58.721 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.721 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.982 --rc genhtml_branch_coverage=1 00:09:58.982 --rc genhtml_function_coverage=1 00:09:58.982 --rc genhtml_legend=1 00:09:58.982 --rc geninfo_all_blocks=1 00:09:58.982 --rc geninfo_unexecuted_blocks=1 00:09:58.982 00:09:58.982 ' 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.982 --rc genhtml_branch_coverage=1 00:09:58.982 --rc genhtml_function_coverage=1 00:09:58.982 --rc genhtml_legend=1 00:09:58.982 --rc geninfo_all_blocks=1 00:09:58.982 --rc geninfo_unexecuted_blocks=1 00:09:58.982 00:09:58.982 ' 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.982 --rc genhtml_branch_coverage=1 00:09:58.982 --rc genhtml_function_coverage=1 00:09:58.982 --rc genhtml_legend=1 00:09:58.982 --rc geninfo_all_blocks=1 00:09:58.982 --rc geninfo_unexecuted_blocks=1 00:09:58.982 00:09:58.982 ' 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.982 --rc genhtml_branch_coverage=1 00:09:58.982 --rc genhtml_function_coverage=1 00:09:58.982 --rc genhtml_legend=1 00:09:58.982 --rc geninfo_all_blocks=1 00:09:58.982 --rc geninfo_unexecuted_blocks=1 00:09:58.982 00:09:58.982 ' 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.982 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.983 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:58.983 00:54:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@319 -- # net_devs=() 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 
-- # for pci in "${pci_devs[@]}" 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:05.563 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:05.563 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@405 -- # modinfo irdma 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.563 00:54:11 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:05.563 Found net devices under 0000:af:00.0: cvl_0_0 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:05.563 Found net devices under 0000:af:00.1: cvl_0_1 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:05.563 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:10:05.564 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:05.564 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:10:05.564 altname enp175s0f0np0 00:10:05.564 altname ens801f0np0 00:10:05.564 inet 192.168.100.8/24 scope global cvl_0_0 00:10:05.564 valid_lft forever preferred_lft forever 00:10:05.564 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 
00:10:05.564 valid_lft forever preferred_lft forever 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:10:05.564 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:05.564 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:10:05.564 altname enp175s0f1np1 00:10:05.564 altname ens801f1np1 00:10:05.564 inet 192.168.100.9/24 scope global cvl_0_1 00:10:05.564 valid_lft forever preferred_lft forever 00:10:05.564 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:10:05.564 valid_lft forever preferred_lft forever 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 
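The trace above walks each RDMA-capable interface (cvl_0_0, cvl_0_1) and reads its IPv4 address with an ip/awk/cut pipeline before exporting it as a target IP (192.168.100.8 and 192.168.100.9 on this node). A minimal stand-alone sketch of that lookup step, assuming only an interface name; the helper name below is illustrative and is not the harness's own function:

    #!/usr/bin/env bash
    # Sketch of the address lookup seen in the trace: "ip -o -4 addr show" prints one
    # line per IPv4 address, field 4 is "ADDR/PREFIX", so awk + cut yield the bare IP.
    get_ipv4() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1 | head -n 1
    }

    get_ipv4 cvl_0_0   # on the node in this log this would print e.g. 192.168.100.8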
00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:05.564 192.168.100.9' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:05.564 192.168.100.9' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:05.564 192.168.100.9' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:05.564 00:54:11 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:05.564 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:10:05.565 run this test only with TCP transport for now 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:05.565 rmmod nvme_rdma 00:10:05.565 rmmod nvme_fabrics 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.565 00:54:11 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:05.565 00:10:05.565 real 0m6.137s 00:10:05.565 user 0m1.877s 00:10:05.565 sys 0m4.390s 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:05.565 ************************************ 00:10:05.565 END TEST nvmf_target_multipath 00:10:05.565 ************************************ 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.565 ************************************ 00:10:05.565 START TEST nvmf_zcopy 00:10:05.565 ************************************ 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:05.565 * Looking for test storage... 
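Every suite in this log is driven through the same wrapper pattern: a START TEST banner, the test script with its arguments, a real/user/sys timing block, and an END TEST banner (compare the nvmf_queue_depth and nvmf_target_multipath sections above with the nvmf_zcopy section starting here). A rough approximation of that pattern, assuming nothing about the real run_test helper in autotest_common.sh beyond what the banners and timing output show:

    #!/usr/bin/env bash
    # Rough approximation of the START/END banner plus timing pattern visible in this
    # log; the actual run_test helper in SPDK's autotest_common.sh is not reproduced here.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # emits the real/user/sys block when the test ends
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    # Example invocation mirroring this section of the log:
    # run_test_sketch nvmf_zcopy ./test/nvmf/target/zcopy.sh --transport=rdma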
00:10:05.565 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:05.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.565 --rc genhtml_branch_coverage=1 00:10:05.565 --rc genhtml_function_coverage=1 00:10:05.565 --rc genhtml_legend=1 00:10:05.565 --rc geninfo_all_blocks=1 00:10:05.565 --rc geninfo_unexecuted_blocks=1 00:10:05.565 00:10:05.565 ' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:05.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.565 --rc genhtml_branch_coverage=1 00:10:05.565 --rc genhtml_function_coverage=1 00:10:05.565 --rc genhtml_legend=1 00:10:05.565 --rc geninfo_all_blocks=1 00:10:05.565 --rc geninfo_unexecuted_blocks=1 00:10:05.565 00:10:05.565 ' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:05.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.565 --rc genhtml_branch_coverage=1 00:10:05.565 --rc genhtml_function_coverage=1 00:10:05.565 --rc genhtml_legend=1 00:10:05.565 --rc geninfo_all_blocks=1 00:10:05.565 --rc geninfo_unexecuted_blocks=1 00:10:05.565 00:10:05.565 ' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:05.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.565 --rc genhtml_branch_coverage=1 00:10:05.565 --rc genhtml_function_coverage=1 00:10:05.565 --rc genhtml_legend=1 00:10:05.565 --rc geninfo_all_blocks=1 00:10:05.565 --rc geninfo_unexecuted_blocks=1 00:10:05.565 00:10:05.565 ' 00:10:05.565 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.566 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.566 00:54:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:10.848 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:10.848 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:10.848 00:54:17 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@405 -- # modinfo irdma 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:10.848 Found net devices under 0000:af:00.0: cvl_0_0 00:10:10.848 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:10.849 Found net devices under 0000:af:00.1: cvl_0_1 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:10:10.849 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:10.849 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:10:10.849 altname enp175s0f0np0 00:10:10.849 altname ens801f0np0 00:10:10.849 inet 192.168.100.8/24 scope global cvl_0_0 00:10:10.849 valid_lft forever preferred_lft forever 00:10:10.849 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:10:10.849 valid_lft forever preferred_lft forever 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:10:10.849 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:10.849 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:10:10.849 altname enp175s0f1np1 00:10:10.849 altname ens801f1np1 00:10:10.849 inet 192.168.100.9/24 scope global cvl_0_1 00:10:10.849 valid_lft forever preferred_lft forever 00:10:10.849 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:10:10.849 valid_lft forever preferred_lft forever 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:10.849 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:10.850 192.168.100.9' 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:10.850 192.168.100.9' 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:10.850 192.168.100.9' 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:10.850 
00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:10.850 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=231164 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 231164 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 231164 ']' 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.109 00:54:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.109 [2024-11-19 00:54:17.630091] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:11.109 [2024-11-19 00:54:17.630185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.109 [2024-11-19 00:54:17.753372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.368 [2024-11-19 00:54:17.863776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.369 [2024-11-19 00:54:17.863822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.369 [2024-11-19 00:54:17.863832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.369 [2024-11-19 00:54:17.863843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.369 [2024-11-19 00:54:17.863852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
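The trace above collects the addresses of the two cvl interfaces into the newline-separated RDMA_IP_LIST and then derives NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with head/tail before launching nvmf_tgt for the zcopy test. A minimal standalone sketch of that list parsing, using the addresses reported above (the literal list value here is illustrative, not read back from the harness):

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'                            # newline-separated, as echoed above
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # -> 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # -> 192.168.100.9
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"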
00:10:11.369 [2024-11-19 00:54:17.865258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:10:11.937 Unsupported transport: rdma 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:11.937 nvmf_trace.0 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:11.937 rmmod nvme_rdma 00:10:11.937 rmmod nvme_fabrics 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set 
-e 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 231164 ']' 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 231164 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 231164 ']' 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 231164 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 231164 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 231164' 00:10:11.937 killing process with pid 231164 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 231164 00:10:11.937 00:54:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 231164 00:10:13.316 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.316 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:13.316 00:10:13.316 real 0m8.226s 00:10:13.316 user 0m4.118s 00:10:13.316 sys 0m4.805s 00:10:13.316 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.316 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.316 ************************************ 00:10:13.316 END TEST nvmf_zcopy 00:10:13.316 ************************************ 00:10:13.316 00:54:19 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:13.316 00:54:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.316 00:54:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.316 00:54:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.316 ************************************ 00:10:13.316 START TEST nvmf_nmic 00:10:13.316 ************************************ 00:10:13.316 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:13.316 * Looking for test storage... 
00:10:13.316 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:10:13.316 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:13.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.317 --rc genhtml_branch_coverage=1 00:10:13.317 --rc genhtml_function_coverage=1 00:10:13.317 --rc genhtml_legend=1 00:10:13.317 --rc geninfo_all_blocks=1 00:10:13.317 --rc geninfo_unexecuted_blocks=1 00:10:13.317 00:10:13.317 ' 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:13.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.317 --rc genhtml_branch_coverage=1 00:10:13.317 --rc genhtml_function_coverage=1 00:10:13.317 --rc genhtml_legend=1 00:10:13.317 --rc geninfo_all_blocks=1 00:10:13.317 --rc geninfo_unexecuted_blocks=1 00:10:13.317 00:10:13.317 ' 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:13.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.317 --rc genhtml_branch_coverage=1 00:10:13.317 --rc genhtml_function_coverage=1 00:10:13.317 --rc genhtml_legend=1 00:10:13.317 --rc geninfo_all_blocks=1 00:10:13.317 --rc geninfo_unexecuted_blocks=1 00:10:13.317 00:10:13.317 ' 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:13.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.317 --rc genhtml_branch_coverage=1 00:10:13.317 --rc genhtml_function_coverage=1 00:10:13.317 --rc genhtml_legend=1 00:10:13.317 --rc geninfo_all_blocks=1 00:10:13.317 --rc geninfo_unexecuted_blocks=1 00:10:13.317 00:10:13.317 ' 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.317 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.318 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
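The nmic setup traced above (scripts/common.sh, the 'lt 1.15 2' / cmp_versions calls) decides which lcov options to use by comparing version strings component-wise: both strings are split on '.', '-' and ':' into arrays and walked left to right. A stripped-down sketch of the same comparison, assuming purely numeric components; the function name is hypothetical, not the harness's own:

version_lt() {                        # return 0 if $1 < $2, 1 otherwise (sketch of cmp_versions' "<" case)
    local IFS=.-:                     # split on the same separators the trace shows
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}     # missing components compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                          # equal -> not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"  # mirrors the 'lt 1.15 2' check traced above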
00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.318 00:54:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.895 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.895 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:19.895 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:19.895 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:19.895 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.896 00:54:25 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:19.896 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:19.896 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 
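gather_supported_nvmf_pci_devs, traced above for both tests, matches the host's PCI functions against known Intel (e810/x722) and Mellanox device IDs and then reports the netdevs bound to each match ("Found 0000:af:00.0 (0x8086 - 0x159b)", "Found net devices under 0000:af:00.0: cvl_0_0"). A simplified sketch of that lookup which reads sysfs directly instead of the harness's pci_bus_cache (a stand-in, not the common.sh implementation):

intel=0x8086 e810_dev=0x159b                        # E810 vendor/device IDs seen above
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == "$intel" && $device == "$e810_dev" ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    ls "$pci/net" 2>/dev/null                       # netdev names bound to this function, e.g. cvl_0_0
done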
00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@405 -- # modinfo irdma 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:19.896 Found net devices under 0000:af:00.0: cvl_0_0 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:19.896 Found net devices under 0000:af:00.1: cvl_0_1 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # 
'[' Linux '!=' Linux ']' 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:19.896 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # 
awk '{print $4}' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:10:19.897 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:19.897 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:10:19.897 altname enp175s0f0np0 00:10:19.897 altname ens801f0np0 00:10:19.897 inet 192.168.100.8/24 scope global cvl_0_0 00:10:19.897 valid_lft forever preferred_lft forever 00:10:19.897 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:10:19.897 valid_lft forever preferred_lft forever 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:10:19.897 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:19.897 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:10:19.897 altname enp175s0f1np1 00:10:19.897 altname ens801f1np1 00:10:19.897 inet 192.168.100.9/24 scope global cvl_0_1 00:10:19.897 valid_lft forever preferred_lft forever 00:10:19.897 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:10:19.897 valid_lft forever preferred_lft forever 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:19.897 00:54:25 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:19.897 192.168.100.9' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:19.897 192.168.100.9' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:19.897 192.168.100.9' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=234676 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 234676 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 234676 ']' 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.897 00:54:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.897 [2024-11-19 00:54:25.911313] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:19.898 [2024-11-19 00:54:25.911412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.898 [2024-11-19 00:54:26.037686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.898 [2024-11-19 00:54:26.147095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.898 [2024-11-19 00:54:26.147141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.898 [2024-11-19 00:54:26.147151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.898 [2024-11-19 00:54:26.147161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.898 [2024-11-19 00:54:26.147168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
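The trace above resolves the two RDMA target addresses by parsing 'ip -o -4 addr show' output for each cvl interface. A condensed sketch of that lookup, assuming the same cvl_0_0/cvl_0_1 interface names (this is a simplification, not the verbatim nvmf/common.sh code):

  get_ip_address() {
      local interface=$1
      # first IPv4 address on the interface, with the /24 prefix length stripped
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  NVMF_FIRST_TARGET_IP=$(get_ip_address cvl_0_0)     # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address cvl_0_1)    # 192.168.100.9 in this run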
00:10:19.898 [2024-11-19 00:54:26.149645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.898 [2024-11-19 00:54:26.149733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.898 [2024-11-19 00:54:26.149802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.898 [2024-11-19 00:54:26.149823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.156 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.156 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:20.156 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.156 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:20.156 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.156 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.156 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:20.157 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.157 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.157 [2024-11-19 00:54:26.781549] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:10:20.157 [2024-11-19 00:54:26.790981] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:10:20.157 [2024-11-19 00:54:26.791010] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:10:20.157 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.157 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:20.157 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.157 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.415 Malloc0 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.416 [2024-11-19 00:54:26.921401] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:20.416 test case1: single bdev can't be used in multiple subsystems 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 
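The nmic test drives the target through rpc_cmd, which forwards to scripts/rpc.py (the fio test later in this log sets rpc_py to that same script). The bring-up steps traced above, rewritten as plain rpc.py calls with the transport, bdev, subsystem, namespace and listener arguments taken from this run; a sketch, not the nmic.sh source:

  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Test case1 then attempts to add the same Malloc0 under cnode2, which is expected to fail because the bdev is already claimed, as the error traced below confirms.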
00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.416 [2024-11-19 00:54:26.953463] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:20.416 [2024-11-19 00:54:26.953492] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:20.416 [2024-11-19 00:54:26.953505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.416 request: 00:10:20.416 { 00:10:20.416 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:20.416 "namespace": { 00:10:20.416 "bdev_name": "Malloc0", 00:10:20.416 "no_auto_visible": false 00:10:20.416 }, 00:10:20.416 "method": "nvmf_subsystem_add_ns", 00:10:20.416 "req_id": 1 00:10:20.416 } 00:10:20.416 Got JSON-RPC error response 00:10:20.416 response: 00:10:20.416 { 00:10:20.416 "code": -32602, 00:10:20.416 "message": "Invalid parameters" 00:10:20.416 } 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:20.416 Adding namespace failed - expected result. 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:20.416 test case2: host connect to nvmf target in multiple paths 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.416 [2024-11-19 00:54:26.965549] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.416 00:54:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:20.675 00:54:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:10:20.933 00:54:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.933 00:54:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:20.933 00:54:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:10:20.933 00:54:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:20.933 00:54:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:22.833 00:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:22.833 00:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:22.833 00:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.833 00:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:22.833 00:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.833 00:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:22.833 00:54:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:22.833 [global] 00:10:22.833 thread=1 00:10:22.833 invalidate=1 00:10:22.833 rw=write 00:10:22.833 time_based=1 00:10:22.833 runtime=1 00:10:22.833 ioengine=libaio 00:10:22.833 direct=1 00:10:22.833 bs=4096 00:10:22.833 iodepth=1 00:10:22.833 norandommap=0 00:10:22.833 numjobs=1 00:10:22.833 00:10:22.833 verify_dump=1 00:10:22.833 verify_backlog=512 00:10:22.833 verify_state_save=0 00:10:22.833 do_verify=1 00:10:22.833 verify=crc32c-intel 00:10:22.833 [job0] 00:10:22.833 filename=/dev/nvme0n1 00:10:22.833 Could not set queue depth (nvme0n1) 00:10:23.397 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.397 fio-3.35 00:10:23.397 Starting 1 thread 00:10:24.771 00:10:24.771 job0: (groupid=0, jobs=1): err= 0: pid=235513: Tue Nov 19 00:54:31 2024 00:10:24.771 read: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(23.8MiB/1001msec) 00:10:24.771 slat (nsec): min=6472, max=25872, avg=7197.60, stdev=836.00 00:10:24.771 clat (usec): min=58, max=110, avg=72.21, stdev= 4.18 00:10:24.771 lat (usec): min=69, max=117, avg=79.40, stdev= 4.25 00:10:24.771 clat percentiles (usec): 00:10:24.771 | 1.00th=[ 65], 5.00th=[ 67], 10.00th=[ 68], 20.00th=[ 69], 00:10:24.771 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 74], 00:10:24.771 | 70.00th=[ 75], 80.00th=[ 76], 90.00th=[ 78], 95.00th=[ 80], 00:10:24.771 | 99.00th=[ 84], 99.50th=[ 85], 99.90th=[ 89], 99.95th=[ 92], 00:10:24.771 | 99.99th=[ 111] 00:10:24.771 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:10:24.771 slat (nsec): min=8450, max=47118, avg=9147.96, stdev=1003.44 00:10:24.771 clat (usec): min=54, max=375, avg=70.75, stdev= 5.76 00:10:24.771 lat (usec): min=69, max=384, avg=79.90, stdev= 5.94 00:10:24.771 clat percentiles (usec): 00:10:24.771 | 1.00th=[ 63], 5.00th=[ 65], 10.00th=[ 67], 20.00th=[ 68], 00:10:24.771 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 71], 60.00th=[ 72], 00:10:24.771 | 70.00th=[ 73], 80.00th=[ 75], 90.00th=[ 77], 95.00th=[ 79], 00:10:24.771 | 99.00th=[ 82], 99.50th=[ 84], 99.90th=[ 88], 99.95th=[ 94], 00:10:24.771 | 99.99th=[ 375] 00:10:24.771 bw ( KiB/s): min=24576, max=24576, per=100.00%, avg=24576.00, stdev= 0.00, samples=1 00:10:24.771 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:10:24.771 lat (usec) : 100=99.96%, 250=0.03%, 500=0.01% 00:10:24.771 cpu : 
usr=6.90%, sys=13.20%, ctx=12240, majf=0, minf=1 00:10:24.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.771 issued rwts: total=6096,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.771 00:10:24.771 Run status group 0 (all jobs): 00:10:24.771 READ: bw=23.8MiB/s (24.9MB/s), 23.8MiB/s-23.8MiB/s (24.9MB/s-24.9MB/s), io=23.8MiB (25.0MB), run=1001-1001msec 00:10:24.771 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:10:24.771 00:10:24.771 Disk stats (read/write): 00:10:24.771 nvme0n1: ios=5433/5632, merge=0/0, ticks=354/356, in_queue=710, util=90.78% 00:10:24.771 00:54:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:26.672 rmmod nvme_rdma 00:10:26.672 rmmod nvme_fabrics 00:10:26.672 00:54:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 234676 ']' 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 234676 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 234676 
']' 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 234676 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234676 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.672 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234676' 00:10:26.672 killing process with pid 234676 00:10:26.673 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 234676 00:10:26.673 00:54:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 234676 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:28.050 00:10:28.050 real 0m14.668s 00:10:28.050 user 0m39.937s 00:10:28.050 sys 0m5.558s 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.050 ************************************ 00:10:28.050 END TEST nvmf_nmic 00:10:28.050 ************************************ 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:28.050 ************************************ 00:10:28.050 START TEST nvmf_fio_target 00:10:28.050 ************************************ 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:28.050 * Looking for test storage... 
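For test case2 the host connects to the same subsystem over both listener ports, waits for the namespace to show up by serial number, runs the fio verify job, and disconnects. A condensed sketch of that flow with the NQN, host identity, addresses and serial taken from the trace above (the waitforserial and nvmftestfini helpers do the same work with more error handling):

  subnqn=nqn.2016-06.io.spdk:cnode1
  host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
        --hostid=801347e8-3fd0-e911-906e-0017a4403562)
  nvme connect -i 15 "${host[@]}" -t rdma -n $subnqn -a 192.168.100.8 -s 4420
  nvme connect -i 15 "${host[@]}" -t rdma -n $subnqn -a 192.168.100.8 -s 4421
  # wait for a block device carrying the subsystem serial to appear
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
  # 4k sequential-write verify job against the new namespace (the fio output above)
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
  nvme disconnect -n $subnqn    # reported above as 'disconnected 2 controller(s)'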
00:10:28.050 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.050 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:28.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.051 --rc genhtml_branch_coverage=1 00:10:28.051 --rc genhtml_function_coverage=1 00:10:28.051 --rc genhtml_legend=1 00:10:28.051 --rc geninfo_all_blocks=1 00:10:28.051 --rc geninfo_unexecuted_blocks=1 00:10:28.051 00:10:28.051 ' 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:28.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.051 --rc genhtml_branch_coverage=1 00:10:28.051 --rc genhtml_function_coverage=1 00:10:28.051 --rc genhtml_legend=1 00:10:28.051 --rc geninfo_all_blocks=1 00:10:28.051 --rc geninfo_unexecuted_blocks=1 00:10:28.051 00:10:28.051 ' 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:28.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.051 --rc genhtml_branch_coverage=1 00:10:28.051 --rc genhtml_function_coverage=1 00:10:28.051 --rc genhtml_legend=1 00:10:28.051 --rc geninfo_all_blocks=1 00:10:28.051 --rc geninfo_unexecuted_blocks=1 00:10:28.051 00:10:28.051 ' 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:28.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.051 --rc genhtml_branch_coverage=1 00:10:28.051 --rc genhtml_function_coverage=1 00:10:28.051 --rc genhtml_legend=1 00:10:28.051 --rc geninfo_all_blocks=1 00:10:28.051 --rc geninfo_unexecuted_blocks=1 00:10:28.051 00:10:28.051 ' 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.051 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.051 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.052 
00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.052 00:54:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
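gather_supported_nvmf_pci_devs above assembles its candidate NIC list from known vendor/device IDs (the E810 entries are 8086:1592 and 8086:159b). As a hand check on the same host, one way to list those functions; a hypothetical lookup for illustration, not the common.sh implementation:

  # Intel E810 functions by PCI vendor:device ID; the trace below reports
  # 0000:af:00.0 and 0000:af:00.1 with net devices cvl_0_0 and cvl_0_1
  lspci -Dnn -d 8086:159b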
00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:34.628 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.628 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:34.629 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@405 -- # modinfo irdma 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:34.629 Found net devices under 0000:af:00.0: cvl_0_0 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:34.629 Found net devices under 0000:af:00.1: cvl_0_1 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:34.629 00:54:40 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:10:34.629 
00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:10:34.629 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:34.629 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:10:34.629 altname enp175s0f0np0 00:10:34.629 altname ens801f0np0 00:10:34.629 inet 192.168.100.8/24 scope global cvl_0_0 00:10:34.629 valid_lft forever preferred_lft forever 00:10:34.629 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:10:34.629 valid_lft forever preferred_lft forever 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:10:34.629 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:10:34.629 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:10:34.629 altname enp175s0f1np1 00:10:34.629 altname ens801f1np1 00:10:34.629 inet 192.168.100.9/24 scope global cvl_0_1 00:10:34.629 valid_lft forever preferred_lft forever 00:10:34.629 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:10:34.629 valid_lft forever preferred_lft forever 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:34.629 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@116 -- # interface=cvl_0_1 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:34.630 192.168.100.9' 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:34.630 192.168.100.9' 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:34.630 192.168.100.9' 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=239269 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 239269 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 239269 ']' 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
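A minimal sketch of how the RDMA_IP_LIST values traced above are derived by nvmf/common.sh's get_ip_address step (assuming the cvl_0_0/cvl_0_1 interfaces are up and addressed as in this run; names and addresses are taken from the trace, the helper below is a recap, not the exact script source):

    # Recap of the `ip -o -4 addr show | awk | cut` pipeline visible in the trace above.
    get_ip_address() {
        local interface=$1
        # Fourth field of `ip -o -4 addr show` is the CIDR address; strip the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address cvl_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address cvl_0_1)   # 192.168.100.9 in this run
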
00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.630 00:54:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.630 [2024-11-19 00:54:40.678609] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:34.630 [2024-11-19 00:54:40.678699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.630 [2024-11-19 00:54:40.807137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.630 [2024-11-19 00:54:40.917758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.630 [2024-11-19 00:54:40.917804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.630 [2024-11-19 00:54:40.917814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.630 [2024-11-19 00:54:40.917842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.630 [2024-11-19 00:54:40.917850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.630 [2024-11-19 00:54:40.920268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.630 [2024-11-19 00:54:40.920369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.630 [2024-11-19 00:54:40.920416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.630 [2024-11-19 00:54:40.920394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.889 00:54:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.889 00:54:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:34.889 00:54:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:34.889 00:54:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:34.889 00:54:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.889 00:54:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.889 00:54:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:35.148 [2024-11-19 00:54:41.707825] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:10:35.148 [2024-11-19 00:54:41.717244] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:10:35.148 [2024-11-19 00:54:41.717273] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:10:35.148 00:54:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.406 00:54:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:35.406 00:54:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.664 00:54:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:35.665 00:54:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.924 00:54:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:35.924 00:54:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.182 00:54:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:36.182 00:54:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:36.441 00:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.699 00:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:36.699 00:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.957 00:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:36.957 00:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.216 00:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:37.216 00:54:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:37.474 00:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.733 00:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:37.733 00:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.991 00:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:37.991 00:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.991 00:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:38.249 [2024-11-19 00:54:44.832861] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:38.250 00:54:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:38.508 00:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:38.772 00:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:39.030 00:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:39.030 00:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:39.030 00:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.030 00:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:39.030 00:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:39.030 00:54:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:40.929 00:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:40.929 00:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:40.929 00:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.929 00:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:40.929 00:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.929 00:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:40.929 00:54:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:40.929 [global] 00:10:40.929 thread=1 00:10:40.929 invalidate=1 00:10:40.929 rw=write 00:10:40.929 time_based=1 00:10:40.929 runtime=1 00:10:40.929 ioengine=libaio 00:10:40.929 direct=1 00:10:40.929 bs=4096 00:10:40.929 iodepth=1 00:10:40.929 norandommap=0 00:10:40.929 numjobs=1 00:10:40.929 00:10:40.929 verify_dump=1 00:10:40.929 verify_backlog=512 00:10:40.929 verify_state_save=0 00:10:40.929 do_verify=1 00:10:40.929 verify=crc32c-intel 00:10:40.929 [job0] 00:10:40.929 filename=/dev/nvme0n1 00:10:40.929 [job1] 00:10:40.929 filename=/dev/nvme0n2 00:10:40.929 [job2] 00:10:40.929 filename=/dev/nvme0n3 00:10:40.929 [job3] 00:10:40.929 filename=/dev/nvme0n4 00:10:40.929 Could not set queue depth (nvme0n1) 00:10:40.929 Could not set queue depth (nvme0n2) 00:10:40.929 Could not set queue depth (nvme0n3) 00:10:40.929 Could not 
set queue depth (nvme0n4) 00:10:41.188 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.188 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.188 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.188 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.188 fio-3.35 00:10:41.188 Starting 4 threads 00:10:42.566 00:10:42.566 job0: (groupid=0, jobs=1): err= 0: pid=240806: Tue Nov 19 00:54:49 2024 00:10:42.566 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:42.566 slat (nsec): min=6350, max=36403, avg=7404.56, stdev=894.24 00:10:42.566 clat (usec): min=85, max=802, avg=133.31, stdev=15.65 00:10:42.566 lat (usec): min=92, max=809, avg=140.71, stdev=15.66 00:10:42.566 clat percentiles (usec): 00:10:42.566 | 1.00th=[ 98], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 125], 00:10:42.566 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:10:42.566 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 151], 00:10:42.566 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 221], 00:10:42.566 | 99.99th=[ 799] 00:10:42.566 write: IOPS=3618, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec); 0 zone resets 00:10:42.566 slat (nsec): min=8321, max=52560, avg=9430.08, stdev=1145.69 00:10:42.566 clat (usec): min=81, max=172, avg=123.24, stdev=14.70 00:10:42.566 lat (usec): min=91, max=181, avg=132.67, stdev=14.69 00:10:42.566 clat percentiles (usec): 00:10:42.566 | 1.00th=[ 88], 5.00th=[ 93], 10.00th=[ 99], 20.00th=[ 114], 00:10:42.566 | 30.00th=[ 119], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:10:42.566 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 145], 00:10:42.566 | 99.00th=[ 151], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 161], 00:10:42.566 | 99.99th=[ 174] 00:10:42.566 bw ( KiB/s): min=16384, max=16384, per=25.40%, avg=16384.00, stdev= 0.00, samples=1 00:10:42.566 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:42.566 lat (usec) : 100=6.01%, 250=93.98%, 1000=0.01% 00:10:42.566 cpu : usr=3.80%, sys=8.70%, ctx=7206, majf=0, minf=1 00:10:42.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.566 issued rwts: total=3584,3622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.566 job1: (groupid=0, jobs=1): err= 0: pid=240807: Tue Nov 19 00:54:49 2024 00:10:42.566 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:42.566 slat (nsec): min=7244, max=25166, avg=10164.50, stdev=1358.71 00:10:42.566 clat (usec): min=86, max=179, avg=129.56, stdev= 9.44 00:10:42.566 lat (usec): min=97, max=189, avg=139.72, stdev= 9.52 00:10:42.566 clat percentiles (usec): 00:10:42.566 | 1.00th=[ 100], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 123], 00:10:42.566 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:10:42.566 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 145], 00:10:42.566 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 161], 99.95th=[ 167], 00:10:42.566 | 99.99th=[ 180] 00:10:42.566 write: IOPS=3607, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec); 0 zone resets 00:10:42.566 slat (nsec): 
min=9889, max=46133, avg=12942.21, stdev=1660.09 00:10:42.566 clat (usec): min=81, max=181, avg=119.52, stdev=13.02 00:10:42.566 lat (usec): min=94, max=195, avg=132.46, stdev=13.15 00:10:42.566 clat percentiles (usec): 00:10:42.566 | 1.00th=[ 87], 5.00th=[ 92], 10.00th=[ 97], 20.00th=[ 112], 00:10:42.566 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 125], 00:10:42.566 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 137], 00:10:42.566 | 99.00th=[ 145], 99.50th=[ 147], 99.90th=[ 155], 99.95th=[ 163], 00:10:42.566 | 99.99th=[ 182] 00:10:42.566 bw ( KiB/s): min=16384, max=16384, per=25.40%, avg=16384.00, stdev= 0.00, samples=1 00:10:42.566 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:42.566 lat (usec) : 100=6.39%, 250=93.61% 00:10:42.566 cpu : usr=5.30%, sys=12.30%, ctx=7195, majf=0, minf=2 00:10:42.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.566 issued rwts: total=3584,3611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.566 job2: (groupid=0, jobs=1): err= 0: pid=240812: Tue Nov 19 00:54:49 2024 00:10:42.566 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:10:42.566 slat (nsec): min=4751, max=27074, avg=7581.79, stdev=1227.80 00:10:42.566 clat (usec): min=89, max=319, avg=109.96, stdev= 8.80 00:10:42.566 lat (usec): min=96, max=331, avg=117.54, stdev= 9.00 00:10:42.566 clat percentiles (usec): 00:10:42.566 | 1.00th=[ 97], 5.00th=[ 100], 10.00th=[ 102], 20.00th=[ 104], 00:10:42.566 | 30.00th=[ 106], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 111], 00:10:42.566 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 120], 95.00th=[ 125], 00:10:42.566 | 99.00th=[ 135], 99.50th=[ 139], 99.90th=[ 143], 99.95th=[ 159], 00:10:42.566 | 99.99th=[ 318] 00:10:42.566 write: IOPS=4297, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1001msec); 0 zone resets 00:10:42.566 slat (nsec): min=8489, max=50352, avg=9839.26, stdev=1735.07 00:10:42.566 clat (usec): min=89, max=190, avg=106.58, stdev= 7.87 00:10:42.566 lat (usec): min=98, max=225, avg=116.42, stdev= 8.32 00:10:42.566 clat percentiles (usec): 00:10:42.566 | 1.00th=[ 94], 5.00th=[ 96], 10.00th=[ 98], 20.00th=[ 100], 00:10:42.566 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 105], 60.00th=[ 108], 00:10:42.566 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 122], 00:10:42.566 | 99.00th=[ 130], 99.50th=[ 135], 99.90th=[ 147], 99.95th=[ 159], 00:10:42.566 | 99.99th=[ 192] 00:10:42.566 bw ( KiB/s): min=17040, max=17040, per=26.42%, avg=17040.00, stdev= 0.00, samples=1 00:10:42.566 iops : min= 4260, max= 4260, avg=4260.00, stdev= 0.00, samples=1 00:10:42.566 lat (usec) : 100=12.71%, 250=87.27%, 500=0.02% 00:10:42.566 cpu : usr=4.70%, sys=10.00%, ctx=8398, majf=0, minf=1 00:10:42.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.566 issued rwts: total=4096,4302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.566 job3: (groupid=0, jobs=1): err= 0: pid=240813: Tue Nov 19 00:54:49 2024 00:10:42.566 read: IOPS=4230, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1001msec) 00:10:42.566 
slat (nsec): min=6317, max=22174, avg=7542.60, stdev=868.18 00:10:42.566 clat (usec): min=80, max=639, avg=104.26, stdev=10.41 00:10:42.566 lat (usec): min=97, max=646, avg=111.80, stdev=10.44 00:10:42.566 clat percentiles (usec): 00:10:42.567 | 1.00th=[ 93], 5.00th=[ 96], 10.00th=[ 97], 20.00th=[ 99], 00:10:42.567 | 30.00th=[ 101], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 105], 00:10:42.567 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 113], 95.00th=[ 116], 00:10:42.567 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 135], 99.95th=[ 143], 00:10:42.567 | 99.99th=[ 644] 00:10:42.567 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:42.567 slat (nsec): min=8293, max=47317, avg=9628.74, stdev=1052.76 00:10:42.567 clat (usec): min=86, max=168, avg=100.63, stdev= 6.42 00:10:42.567 lat (usec): min=96, max=215, avg=110.26, stdev= 6.59 00:10:42.567 clat percentiles (usec): 00:10:42.567 | 1.00th=[ 90], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 96], 00:10:42.567 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 101], 00:10:42.567 | 70.00th=[ 103], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 114], 00:10:42.567 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 131], 99.95th=[ 133], 00:10:42.567 | 99.99th=[ 169] 00:10:42.567 bw ( KiB/s): min=18896, max=18896, per=29.29%, avg=18896.00, stdev= 0.00, samples=1 00:10:42.567 iops : min= 4724, max= 4724, avg=4724.00, stdev= 0.00, samples=1 00:10:42.567 lat (usec) : 100=39.64%, 250=60.35%, 750=0.01% 00:10:42.567 cpu : usr=5.50%, sys=9.80%, ctx=8844, majf=0, minf=1 00:10:42.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.567 issued rwts: total=4235,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.567 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.567 00:10:42.567 Run status group 0 (all jobs): 00:10:42.567 READ: bw=60.5MiB/s (63.4MB/s), 14.0MiB/s-16.5MiB/s (14.7MB/s-17.3MB/s), io=60.5MiB (63.5MB), run=1001-1001msec 00:10:42.567 WRITE: bw=63.0MiB/s (66.1MB/s), 14.1MiB/s-18.0MiB/s (14.8MB/s-18.9MB/s), io=63.1MiB (66.1MB), run=1001-1001msec 00:10:42.567 00:10:42.567 Disk stats (read/write): 00:10:42.567 nvme0n1: ios=2850/3072, merge=0/0, ticks=385/349, in_queue=734, util=82.46% 00:10:42.567 nvme0n2: ios=2784/3072, merge=0/0, ticks=337/330, in_queue=667, util=83.37% 00:10:42.567 nvme0n3: ios=3224/3584, merge=0/0, ticks=334/348, in_queue=682, util=87.69% 00:10:42.567 nvme0n4: ios=3566/3584, merge=0/0, ticks=356/331, in_queue=687, util=89.23% 00:10:42.567 00:54:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:42.567 [global] 00:10:42.567 thread=1 00:10:42.567 invalidate=1 00:10:42.567 rw=randwrite 00:10:42.567 time_based=1 00:10:42.567 runtime=1 00:10:42.567 ioengine=libaio 00:10:42.567 direct=1 00:10:42.567 bs=4096 00:10:42.567 iodepth=1 00:10:42.567 norandommap=0 00:10:42.567 numjobs=1 00:10:42.567 00:10:42.567 verify_dump=1 00:10:42.567 verify_backlog=512 00:10:42.567 verify_state_save=0 00:10:42.567 do_verify=1 00:10:42.567 verify=crc32c-intel 00:10:42.567 [job0] 00:10:42.567 filename=/dev/nvme0n1 00:10:42.567 [job1] 00:10:42.567 filename=/dev/nvme0n2 00:10:42.567 [job2] 00:10:42.567 filename=/dev/nvme0n3 00:10:42.567 [job3] 00:10:42.567 filename=/dev/nvme0n4 00:10:42.567 
Could not set queue depth (nvme0n1) 00:10:42.567 Could not set queue depth (nvme0n2) 00:10:42.567 Could not set queue depth (nvme0n3) 00:10:42.567 Could not set queue depth (nvme0n4) 00:10:42.825 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.825 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.825 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.825 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.825 fio-3.35 00:10:42.825 Starting 4 threads 00:10:44.200 00:10:44.200 job0: (groupid=0, jobs=1): err= 0: pid=241180: Tue Nov 19 00:54:50 2024 00:10:44.200 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:44.200 slat (nsec): min=6727, max=33058, avg=10703.99, stdev=1620.64 00:10:44.200 clat (usec): min=80, max=358, avg=172.63, stdev=24.69 00:10:44.200 lat (usec): min=87, max=370, avg=183.34, stdev=25.43 00:10:44.200 clat percentiles (usec): 00:10:44.200 | 1.00th=[ 89], 5.00th=[ 96], 10.00th=[ 163], 20.00th=[ 169], 00:10:44.200 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:10:44.200 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:10:44.200 | 99.00th=[ 217], 99.50th=[ 233], 99.90th=[ 281], 99.95th=[ 285], 00:10:44.200 | 99.99th=[ 359] 00:10:44.200 write: IOPS=2970, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec); 0 zone resets 00:10:44.200 slat (nsec): min=7843, max=47907, avg=12236.98, stdev=2460.27 00:10:44.200 clat (usec): min=74, max=837, avg=160.83, stdev=38.76 00:10:44.200 lat (usec): min=83, max=846, avg=173.06, stdev=39.72 00:10:44.200 clat percentiles (usec): 00:10:44.200 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 93], 20.00th=[ 157], 00:10:44.200 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:10:44.200 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 194], 00:10:44.200 | 99.00th=[ 249], 99.50th=[ 330], 99.90th=[ 486], 99.95th=[ 537], 00:10:44.200 | 99.99th=[ 840] 00:10:44.200 bw ( KiB/s): min=12288, max=12288, per=27.06%, avg=12288.00, stdev= 0.00, samples=1 00:10:44.200 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:44.200 lat (usec) : 100=10.48%, 250=88.96%, 500=0.52%, 750=0.02%, 1000=0.02% 00:10:44.200 cpu : usr=3.90%, sys=9.10%, ctx=5537, majf=0, minf=1 00:10:44.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.200 issued rwts: total=2560,2973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.200 job1: (groupid=0, jobs=1): err= 0: pid=241181: Tue Nov 19 00:54:50 2024 00:10:44.200 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:44.200 slat (nsec): min=7106, max=24636, avg=10956.69, stdev=1323.65 00:10:44.200 clat (usec): min=96, max=470, avg=178.82, stdev=16.84 00:10:44.200 lat (usec): min=106, max=482, avg=189.77, stdev=16.80 00:10:44.200 clat percentiles (usec): 00:10:44.200 | 1.00th=[ 137], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:10:44.200 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:10:44.200 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 200], 00:10:44.200 | 99.00th=[ 227], 
99.50th=[ 247], 99.90th=[ 404], 99.95th=[ 453], 00:10:44.200 | 99.99th=[ 469] 00:10:44.200 write: IOPS=2817, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec); 0 zone resets 00:10:44.200 slat (nsec): min=7982, max=39613, avg=12208.27, stdev=2084.03 00:10:44.200 clat (usec): min=88, max=571, avg=164.47, stdev=31.49 00:10:44.200 lat (usec): min=97, max=583, avg=176.68, stdev=32.32 00:10:44.200 clat percentiles (usec): 00:10:44.200 | 1.00th=[ 93], 5.00th=[ 99], 10.00th=[ 105], 20.00th=[ 159], 00:10:44.200 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:10:44.200 | 70.00th=[ 176], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 196], 00:10:44.200 | 99.00th=[ 233], 99.50th=[ 281], 99.90th=[ 486], 99.95th=[ 537], 00:10:44.200 | 99.99th=[ 570] 00:10:44.200 bw ( KiB/s): min=12288, max=12288, per=27.06%, avg=12288.00, stdev= 0.00, samples=1 00:10:44.200 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:44.200 lat (usec) : 100=3.38%, 250=96.02%, 500=0.56%, 750=0.04% 00:10:44.200 cpu : usr=3.90%, sys=8.90%, ctx=5381, majf=0, minf=1 00:10:44.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.200 issued rwts: total=2560,2820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.200 job2: (groupid=0, jobs=1): err= 0: pid=241182: Tue Nov 19 00:54:50 2024 00:10:44.200 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec) 00:10:44.200 slat (nsec): min=6545, max=26051, avg=7586.97, stdev=810.54 00:10:44.200 clat (usec): min=101, max=452, avg=182.50, stdev=15.00 00:10:44.200 lat (usec): min=108, max=478, avg=190.08, stdev=15.15 00:10:44.200 clat percentiles (usec): 00:10:44.200 | 1.00th=[ 121], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:10:44.200 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:10:44.200 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:10:44.200 | 99.00th=[ 227], 99.50th=[ 233], 99.90th=[ 269], 99.95th=[ 367], 00:10:44.200 | 99.99th=[ 453] 00:10:44.200 write: IOPS=2778, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1000msec); 0 zone resets 00:10:44.200 slat (nsec): min=7787, max=51017, avg=9434.28, stdev=1272.35 00:10:44.200 clat (usec): min=93, max=703, avg=171.45, stdev=27.71 00:10:44.200 lat (usec): min=102, max=713, avg=180.89, stdev=27.77 00:10:44.200 clat percentiles (usec): 00:10:44.200 | 1.00th=[ 104], 5.00th=[ 114], 10.00th=[ 153], 20.00th=[ 165], 00:10:44.200 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:10:44.200 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 196], 00:10:44.200 | 99.00th=[ 243], 99.50th=[ 285], 99.90th=[ 445], 99.95th=[ 545], 00:10:44.200 | 99.99th=[ 701] 00:10:44.200 bw ( KiB/s): min=12288, max=12288, per=27.06%, avg=12288.00, stdev= 0.00, samples=1 00:10:44.200 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:44.200 lat (usec) : 100=0.19%, 250=99.33%, 500=0.45%, 750=0.04% 00:10:44.200 cpu : usr=3.20%, sys=6.10%, ctx=5338, majf=0, minf=1 00:10:44.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.200 issued rwts: total=2560,2778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.200 
latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.200 job3: (groupid=0, jobs=1): err= 0: pid=241183: Tue Nov 19 00:54:50 2024 00:10:44.200 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:44.200 slat (nsec): min=6640, max=28167, avg=7599.40, stdev=830.12 00:10:44.200 clat (usec): min=91, max=364, avg=177.93, stdev=22.81 00:10:44.200 lat (usec): min=99, max=371, avg=185.52, stdev=22.80 00:10:44.200 clat percentiles (usec): 00:10:44.200 | 1.00th=[ 99], 5.00th=[ 109], 10.00th=[ 167], 20.00th=[ 174], 00:10:44.200 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:10:44.200 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 194], 95.00th=[ 200], 00:10:44.200 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 253], 99.95th=[ 277], 00:10:44.200 | 99.99th=[ 367] 00:10:44.200 write: IOPS=2791, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec); 0 zone resets 00:10:44.200 slat (nsec): min=8129, max=42381, avg=9352.49, stdev=1316.32 00:10:44.200 clat (usec): min=91, max=679, avg=174.67, stdev=26.19 00:10:44.200 lat (usec): min=101, max=688, avg=184.02, stdev=26.37 00:10:44.200 clat percentiles (usec): 00:10:44.200 | 1.00th=[ 102], 5.00th=[ 147], 10.00th=[ 161], 20.00th=[ 167], 00:10:44.200 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:10:44.200 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 200], 00:10:44.200 | 99.00th=[ 247], 99.50th=[ 310], 99.90th=[ 474], 99.95th=[ 644], 00:10:44.200 | 99.99th=[ 676] 00:10:44.200 bw ( KiB/s): min=12288, max=12288, per=27.06%, avg=12288.00, stdev= 0.00, samples=1 00:10:44.200 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:44.200 lat (usec) : 100=1.03%, 250=98.43%, 500=0.50%, 750=0.04% 00:10:44.200 cpu : usr=3.10%, sys=6.30%, ctx=5354, majf=0, minf=1 00:10:44.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.200 issued rwts: total=2560,2794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.200 00:10:44.200 Run status group 0 (all jobs): 00:10:44.200 READ: bw=40.0MiB/s (41.9MB/s), 9.99MiB/s-10.0MiB/s (10.5MB/s-10.5MB/s), io=40.0MiB (41.9MB), run=1000-1001msec 00:10:44.200 WRITE: bw=44.3MiB/s (46.5MB/s), 10.9MiB/s-11.6MiB/s (11.4MB/s-12.2MB/s), io=44.4MiB (46.6MB), run=1000-1001msec 00:10:44.200 00:10:44.200 Disk stats (read/write): 00:10:44.200 nvme0n1: ios=2098/2439, merge=0/0, ticks=366/395, in_queue=761, util=86.47% 00:10:44.200 nvme0n2: ios=2048/2557, merge=0/0, ticks=341/393, in_queue=734, util=86.60% 00:10:44.200 nvme0n3: ios=2048/2515, merge=0/0, ticks=362/406, in_queue=768, util=88.95% 00:10:44.200 nvme0n4: ios=2048/2532, merge=0/0, ticks=342/420, in_queue=762, util=89.70% 00:10:44.200 00:54:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:44.200 [global] 00:10:44.200 thread=1 00:10:44.200 invalidate=1 00:10:44.200 rw=write 00:10:44.200 time_based=1 00:10:44.200 runtime=1 00:10:44.200 ioengine=libaio 00:10:44.200 direct=1 00:10:44.200 bs=4096 00:10:44.200 iodepth=128 00:10:44.200 norandommap=0 00:10:44.200 numjobs=1 00:10:44.200 00:10:44.200 verify_dump=1 00:10:44.200 verify_backlog=512 00:10:44.200 verify_state_save=0 00:10:44.200 do_verify=1 00:10:44.200 
verify=crc32c-intel 00:10:44.200 [job0] 00:10:44.200 filename=/dev/nvme0n1 00:10:44.200 [job1] 00:10:44.200 filename=/dev/nvme0n2 00:10:44.200 [job2] 00:10:44.200 filename=/dev/nvme0n3 00:10:44.200 [job3] 00:10:44.200 filename=/dev/nvme0n4 00:10:44.200 Could not set queue depth (nvme0n1) 00:10:44.200 Could not set queue depth (nvme0n2) 00:10:44.200 Could not set queue depth (nvme0n3) 00:10:44.200 Could not set queue depth (nvme0n4) 00:10:44.458 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.458 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.458 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.458 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.458 fio-3.35 00:10:44.458 Starting 4 threads 00:10:45.870 00:10:45.870 job0: (groupid=0, jobs=1): err= 0: pid=241549: Tue Nov 19 00:54:52 2024 00:10:45.871 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:10:45.871 slat (nsec): min=1435, max=4501.1k, avg=106988.41, stdev=507348.47 00:10:45.871 clat (usec): min=4550, max=21700, avg=13565.18, stdev=6724.91 00:10:45.871 lat (usec): min=4555, max=21703, avg=13672.17, stdev=6760.56 00:10:45.871 clat percentiles (usec): 00:10:45.871 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 6783], 00:10:45.871 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[11076], 60.00th=[20055], 00:10:45.871 | 70.00th=[20317], 80.00th=[20579], 90.00th=[20841], 95.00th=[20841], 00:10:45.871 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:10:45.871 | 99.99th=[21627] 00:10:45.871 write: IOPS=4875, BW=19.0MiB/s (20.0MB/s)(19.1MiB/1001msec); 0 zone resets 00:10:45.871 slat (usec): min=2, max=4323, avg=100.68, stdev=485.27 00:10:45.871 clat (usec): min=289, max=20539, avg=13122.66, stdev=6440.18 00:10:45.871 lat (usec): min=1068, max=20542, avg=13223.34, stdev=6472.52 00:10:45.871 clat percentiles (usec): 00:10:45.871 | 1.00th=[ 3130], 5.00th=[ 5604], 10.00th=[ 6063], 20.00th=[ 6390], 00:10:45.871 | 30.00th=[ 6783], 40.00th=[ 7177], 50.00th=[18220], 60.00th=[19006], 00:10:45.871 | 70.00th=[19268], 80.00th=[19530], 90.00th=[19530], 95.00th=[19792], 00:10:45.871 | 99.00th=[20317], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:10:45.871 | 99.99th=[20579] 00:10:45.871 bw ( KiB/s): min=12680, max=12680, per=14.31%, avg=12680.00, stdev= 0.00, samples=1 00:10:45.871 iops : min= 3170, max= 3170, avg=3170.00, stdev= 0.00, samples=1 00:10:45.871 lat (usec) : 500=0.01% 00:10:45.871 lat (msec) : 2=0.30%, 4=0.38%, 10=47.87%, 20=28.75%, 50=22.69% 00:10:45.871 cpu : usr=1.90%, sys=4.50%, ctx=2859, majf=0, minf=1 00:10:45.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.871 issued rwts: total=4608,4880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.871 job1: (groupid=0, jobs=1): err= 0: pid=241551: Tue Nov 19 00:54:52 2024 00:10:45.871 read: IOPS=9689, BW=37.8MiB/s (39.7MB/s)(38.0MiB/1004msec) 00:10:45.871 slat (nsec): min=1438, max=1218.2k, avg=48240.86, stdev=169430.67 00:10:45.871 clat (usec): min=5087, max=9076, avg=6331.74, stdev=589.04 00:10:45.871 lat 
(usec): min=5105, max=9102, avg=6379.98, stdev=608.19 00:10:45.871 clat percentiles (usec): 00:10:45.871 | 1.00th=[ 5538], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5932], 00:10:45.871 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:10:45.871 | 70.00th=[ 6456], 80.00th=[ 6849], 90.00th=[ 7242], 95.00th=[ 7504], 00:10:45.871 | 99.00th=[ 8160], 99.50th=[ 8291], 99.90th=[ 8586], 99.95th=[ 8586], 00:10:45.871 | 99.99th=[ 9110] 00:10:45.871 write: IOPS=10.2k, BW=39.7MiB/s (41.6MB/s)(39.9MiB/1004msec); 0 zone resets 00:10:45.871 slat (usec): min=2, max=3384, avg=49.44, stdev=189.42 00:10:45.871 clat (usec): min=2406, max=21123, avg=6416.18, stdev=2381.63 00:10:45.871 lat (usec): min=4691, max=21811, avg=6465.62, stdev=2399.48 00:10:45.871 clat percentiles (usec): 00:10:45.871 | 1.00th=[ 5080], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5538], 00:10:45.871 | 30.00th=[ 5669], 40.00th=[ 5735], 50.00th=[ 5800], 60.00th=[ 5932], 00:10:45.871 | 70.00th=[ 6128], 80.00th=[ 6521], 90.00th=[ 6980], 95.00th=[ 7635], 00:10:45.871 | 99.00th=[19006], 99.50th=[19530], 99.90th=[21103], 99.95th=[21103], 00:10:45.871 | 99.99th=[21103] 00:10:45.871 bw ( KiB/s): min=36864, max=43792, per=45.52%, avg=40328.00, stdev=4898.84, samples=2 00:10:45.871 iops : min= 9216, max=10948, avg=10082.00, stdev=1224.71, samples=2 00:10:45.871 lat (msec) : 4=0.01%, 10=97.91%, 20=1.93%, 50=0.15% 00:10:45.871 cpu : usr=3.89%, sys=6.38%, ctx=1553, majf=0, minf=2 00:10:45.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.871 issued rwts: total=9728,10209,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.871 job2: (groupid=0, jobs=1): err= 0: pid=241555: Tue Nov 19 00:54:52 2024 00:10:45.871 read: IOPS=3100, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1004msec) 00:10:45.871 slat (nsec): min=1646, max=2606.0k, avg=153089.37, stdev=418979.46 00:10:45.871 clat (usec): min=3122, max=21931, avg=19297.74, stdev=2286.95 00:10:45.871 lat (usec): min=3975, max=22489, avg=19450.83, stdev=2263.80 00:10:45.871 clat percentiles (usec): 00:10:45.871 | 1.00th=[ 8291], 5.00th=[16057], 10.00th=[16319], 20.00th=[16909], 00:10:45.871 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20317], 60.00th=[20579], 00:10:45.871 | 70.00th=[20579], 80.00th=[20841], 90.00th=[20841], 95.00th=[21103], 00:10:45.871 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21890], 99.95th=[21890], 00:10:45.871 | 99.99th=[21890] 00:10:45.871 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:10:45.871 slat (usec): min=2, max=2534, avg=143.12, stdev=382.80 00:10:45.871 clat (usec): min=9057, max=22647, avg=18570.22, stdev=1515.39 00:10:45.871 lat (usec): min=9905, max=22650, avg=18713.33, stdev=1476.30 00:10:45.871 clat percentiles (usec): 00:10:45.871 | 1.00th=[14484], 5.00th=[15795], 10.00th=[16057], 20.00th=[17171], 00:10:45.871 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19268], 00:10:45.871 | 70.00th=[19268], 80.00th=[19530], 90.00th=[19792], 95.00th=[20055], 00:10:45.871 | 99.00th=[20841], 99.50th=[21103], 99.90th=[22676], 99.95th=[22676], 00:10:45.871 | 99.99th=[22676] 00:10:45.871 bw ( KiB/s): min=12552, max=15432, per=15.79%, avg=13992.00, stdev=2036.47, samples=2 00:10:45.871 iops : min= 3138, max= 3858, avg=3498.00, stdev=509.12, samples=2 00:10:45.871 
lat (msec) : 4=0.07%, 10=0.67%, 20=68.25%, 50=31.00% 00:10:45.871 cpu : usr=1.30%, sys=3.49%, ctx=2257, majf=0, minf=1 00:10:45.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.871 issued rwts: total=3113,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.871 job3: (groupid=0, jobs=1): err= 0: pid=241556: Tue Nov 19 00:54:52 2024 00:10:45.871 read: IOPS=3098, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1005msec) 00:10:45.871 slat (nsec): min=1663, max=2720.5k, avg=152926.68, stdev=452198.32 00:10:45.871 clat (usec): min=3121, max=22462, avg=19308.11, stdev=2281.92 00:10:45.871 lat (usec): min=3996, max=22467, avg=19461.03, stdev=2250.22 00:10:45.871 clat percentiles (usec): 00:10:45.871 | 1.00th=[ 8291], 5.00th=[16057], 10.00th=[16319], 20.00th=[16909], 00:10:45.871 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20317], 60.00th=[20579], 00:10:45.871 | 70.00th=[20579], 80.00th=[20841], 90.00th=[20841], 95.00th=[21103], 00:10:45.871 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21890], 99.95th=[21890], 00:10:45.871 | 99.99th=[22414] 00:10:45.871 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:10:45.871 slat (usec): min=2, max=2587, avg=143.22, stdev=428.78 00:10:45.871 clat (usec): min=9931, max=22538, avg=18566.25, stdev=1507.11 00:10:45.871 lat (usec): min=9939, max=22542, avg=18709.47, stdev=1454.35 00:10:45.871 clat percentiles (usec): 00:10:45.871 | 1.00th=[14615], 5.00th=[15795], 10.00th=[16057], 20.00th=[17171], 00:10:45.871 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19268], 00:10:45.871 | 70.00th=[19268], 80.00th=[19530], 90.00th=[19792], 95.00th=[20055], 00:10:45.871 | 99.00th=[20579], 99.50th=[20841], 99.90th=[22414], 99.95th=[22414], 00:10:45.871 | 99.99th=[22414] 00:10:45.871 bw ( KiB/s): min=12552, max=15440, per=15.80%, avg=13996.00, stdev=2042.12, samples=2 00:10:45.871 iops : min= 3138, max= 3860, avg=3499.00, stdev=510.53, samples=2 00:10:45.871 lat (msec) : 4=0.03%, 10=0.69%, 20=67.66%, 50=31.62% 00:10:45.871 cpu : usr=1.59%, sys=3.19%, ctx=2277, majf=0, minf=1 00:10:45.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.871 issued rwts: total=3114,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.871 00:10:45.871 Run status group 0 (all jobs): 00:10:45.871 READ: bw=79.9MiB/s (83.8MB/s), 12.1MiB/s-37.8MiB/s (12.7MB/s-39.7MB/s), io=80.3MiB (84.2MB), run=1001-1005msec 00:10:45.871 WRITE: bw=86.5MiB/s (90.7MB/s), 13.9MiB/s-39.7MiB/s (14.6MB/s-41.6MB/s), io=86.9MiB (91.2MB), run=1001-1005msec 00:10:45.871 00:10:45.871 Disk stats (read/write): 00:10:45.871 nvme0n1: ios=3122/3518, merge=0/0, ticks=13334/13801, in_queue=27135, util=86.37% 00:10:45.871 nvme0n2: ios=8704/9126, merge=0/0, ticks=13383/13177, in_queue=26560, util=86.59% 00:10:45.871 nvme0n3: ios=2560/3016, merge=0/0, ticks=12984/14017, in_queue=27001, util=88.84% 00:10:45.871 nvme0n4: ios=2560/3017, merge=0/0, ticks=12962/14024, in_queue=26986, util=89.68% 00:10:45.871 00:54:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 
-- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:45.871 [global] 00:10:45.871 thread=1 00:10:45.871 invalidate=1 00:10:45.871 rw=randwrite 00:10:45.871 time_based=1 00:10:45.871 runtime=1 00:10:45.871 ioengine=libaio 00:10:45.871 direct=1 00:10:45.871 bs=4096 00:10:45.871 iodepth=128 00:10:45.871 norandommap=0 00:10:45.871 numjobs=1 00:10:45.871 00:10:45.871 verify_dump=1 00:10:45.872 verify_backlog=512 00:10:45.872 verify_state_save=0 00:10:45.872 do_verify=1 00:10:45.872 verify=crc32c-intel 00:10:45.872 [job0] 00:10:45.872 filename=/dev/nvme0n1 00:10:45.872 [job1] 00:10:45.872 filename=/dev/nvme0n2 00:10:45.872 [job2] 00:10:45.872 filename=/dev/nvme0n3 00:10:45.872 [job3] 00:10:45.872 filename=/dev/nvme0n4 00:10:45.872 Could not set queue depth (nvme0n1) 00:10:45.872 Could not set queue depth (nvme0n2) 00:10:45.872 Could not set queue depth (nvme0n3) 00:10:45.872 Could not set queue depth (nvme0n4) 00:10:46.129 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.129 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.129 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.129 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.129 fio-3.35 00:10:46.129 Starting 4 threads 00:10:47.500 00:10:47.500 job0: (groupid=0, jobs=1): err= 0: pid=241921: Tue Nov 19 00:54:53 2024 00:10:47.500 read: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1006msec) 00:10:47.500 slat (nsec): min=1513, max=2547.6k, avg=57071.77, stdev=227572.18 00:10:47.500 clat (usec): min=6286, max=13134, avg=7522.70, stdev=446.31 00:10:47.500 lat (usec): min=6303, max=13136, avg=7579.77, stdev=466.24 00:10:47.500 clat percentiles (usec): 00:10:47.501 | 1.00th=[ 6652], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7373], 00:10:47.501 | 30.00th=[ 7439], 40.00th=[ 7439], 50.00th=[ 7504], 60.00th=[ 7570], 00:10:47.501 | 70.00th=[ 7570], 80.00th=[ 7635], 90.00th=[ 7701], 95.00th=[ 7832], 00:10:47.501 | 99.00th=[ 9241], 99.50th=[10814], 99.90th=[12387], 99.95th=[12518], 00:10:47.501 | 99.99th=[13173] 00:10:47.501 write: IOPS=8698, BW=34.0MiB/s (35.6MB/s)(34.2MiB/1006msec); 0 zone resets 00:10:47.501 slat (usec): min=2, max=2447, avg=54.34, stdev=212.02 00:10:47.501 clat (usec): min=2592, max=9545, avg=7103.28, stdev=442.70 00:10:47.501 lat (usec): min=2600, max=9557, avg=7157.62, stdev=462.93 00:10:47.501 clat percentiles (usec): 00:10:47.501 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 6980], 00:10:47.501 | 30.00th=[ 7046], 40.00th=[ 7111], 50.00th=[ 7111], 60.00th=[ 7177], 00:10:47.501 | 70.00th=[ 7242], 80.00th=[ 7308], 90.00th=[ 7439], 95.00th=[ 7504], 00:10:47.501 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 9241], 99.95th=[ 9372], 00:10:47.501 | 99.99th=[ 9503] 00:10:47.501 bw ( KiB/s): min=33128, max=36504, per=42.96%, avg=34816.00, stdev=2387.19, samples=2 00:10:47.501 iops : min= 8282, max= 9126, avg=8704.00, stdev=596.80, samples=2 00:10:47.501 lat (msec) : 4=0.19%, 10=99.47%, 20=0.34% 00:10:47.501 cpu : usr=3.38%, sys=7.06%, ctx=1162, majf=0, minf=1 00:10:47.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.501 issued rwts: total=8704,8751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.501 job1: (groupid=0, jobs=1): err= 0: pid=241922: Tue Nov 19 00:54:53 2024 00:10:47.501 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:10:47.501 slat (nsec): min=1657, max=6354.9k, avg=227748.79, stdev=1034575.80 00:10:47.501 clat (usec): min=21930, max=34693, avg=28873.68, stdev=1112.99 00:10:47.501 lat (usec): min=27352, max=35428, avg=29101.43, stdev=773.11 00:10:47.501 clat percentiles (usec): 00:10:47.501 | 1.00th=[23200], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:10:47.501 | 30.00th=[28705], 40.00th=[28705], 50.00th=[28967], 60.00th=[29230], 00:10:47.501 | 70.00th=[29230], 80.00th=[29492], 90.00th=[29754], 95.00th=[30016], 00:10:47.501 | 99.00th=[30278], 99.50th=[30278], 99.90th=[34341], 99.95th=[34341], 00:10:47.501 | 99.99th=[34866] 00:10:47.501 write: IOPS=2401, BW=9606KiB/s (9837kB/s)(9664KiB/1006msec); 0 zone resets 00:10:47.501 slat (usec): min=2, max=6670, avg=216.66, stdev=957.15 00:10:47.501 clat (usec): min=5474, max=34628, avg=27997.05, stdev=3035.80 00:10:47.501 lat (usec): min=7071, max=34637, avg=28213.71, stdev=2915.94 00:10:47.501 clat percentiles (usec): 00:10:47.501 | 1.00th=[12518], 5.00th=[22938], 10.00th=[27395], 20.00th=[27657], 00:10:47.501 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:10:47.501 | 70.00th=[28967], 80.00th=[29230], 90.00th=[29754], 95.00th=[30016], 00:10:47.501 | 99.00th=[30540], 99.50th=[30540], 99.90th=[34866], 99.95th=[34866], 00:10:47.501 | 99.99th=[34866] 00:10:47.501 bw ( KiB/s): min= 8760, max= 9552, per=11.30%, avg=9156.00, stdev=560.03, samples=2 00:10:47.501 iops : min= 2190, max= 2388, avg=2289.00, stdev=140.01, samples=2 00:10:47.501 lat (msec) : 10=0.52%, 20=0.94%, 50=98.54% 00:10:47.501 cpu : usr=1.69%, sys=2.09%, ctx=424, majf=0, minf=1 00:10:47.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.501 issued rwts: total=2048,2416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.501 job2: (groupid=0, jobs=1): err= 0: pid=241923: Tue Nov 19 00:54:53 2024 00:10:47.501 read: IOPS=4479, BW=17.5MiB/s (18.3MB/s)(17.6MiB/1003msec) 00:10:47.501 slat (nsec): min=1580, max=1074.5k, avg=111723.45, stdev=284596.40 00:10:47.501 clat (usec): min=2333, max=16986, avg=14270.08, stdev=1073.99 00:10:47.501 lat (usec): min=3222, max=16991, avg=14381.80, stdev=1037.66 00:10:47.501 clat percentiles (usec): 00:10:47.501 | 1.00th=[ 8029], 5.00th=[13566], 10.00th=[13698], 20.00th=[13960], 00:10:47.501 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14484], 60.00th=[14615], 00:10:47.501 | 70.00th=[14615], 80.00th=[14615], 90.00th=[14746], 95.00th=[14877], 00:10:47.501 | 99.00th=[15270], 99.50th=[15401], 99.90th=[16188], 99.95th=[16909], 00:10:47.501 | 99.99th=[16909] 00:10:47.501 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:47.501 slat (usec): min=2, max=2457, avg=104.62, stdev=268.92 00:10:47.501 clat (usec): min=11493, max=15272, avg=13581.93, stdev=380.98 00:10:47.501 lat (usec): min=11530, max=15275, avg=13686.54, stdev=278.16 00:10:47.501 clat percentiles (usec): 00:10:47.501 | 1.00th=[12518], 5.00th=[12780], 
10.00th=[13042], 20.00th=[13435], 00:10:47.501 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:10:47.501 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14222], 00:10:47.501 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14877], 99.95th=[15139], 00:10:47.501 | 99.99th=[15270] 00:10:47.501 bw ( KiB/s): min=17672, max=19192, per=22.74%, avg=18432.00, stdev=1074.80, samples=2 00:10:47.501 iops : min= 4418, max= 4798, avg=4608.00, stdev=268.70, samples=2 00:10:47.501 lat (msec) : 4=0.09%, 10=0.63%, 20=99.29% 00:10:47.501 cpu : usr=2.40%, sys=4.09%, ctx=1357, majf=0, minf=1 00:10:47.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.501 issued rwts: total=4493,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.501 job3: (groupid=0, jobs=1): err= 0: pid=241924: Tue Nov 19 00:54:53 2024 00:10:47.501 read: IOPS=4482, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1003msec) 00:10:47.501 slat (nsec): min=1507, max=1035.5k, avg=111704.12, stdev=284818.45 00:10:47.501 clat (usec): min=2352, max=16955, avg=14267.24, stdev=1098.32 00:10:47.501 lat (usec): min=3197, max=16961, avg=14378.95, stdev=1063.59 00:10:47.501 clat percentiles (usec): 00:10:47.501 | 1.00th=[ 7177], 5.00th=[13566], 10.00th=[13698], 20.00th=[14091], 00:10:47.501 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14484], 60.00th=[14615], 00:10:47.501 | 70.00th=[14615], 80.00th=[14615], 90.00th=[14746], 95.00th=[14877], 00:10:47.501 | 99.00th=[15270], 99.50th=[15270], 99.90th=[16909], 99.95th=[16909], 00:10:47.501 | 99.99th=[16909] 00:10:47.501 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:47.501 slat (usec): min=2, max=2459, avg=104.59, stdev=268.82 00:10:47.501 clat (usec): min=11494, max=14594, avg=13577.34, stdev=372.16 00:10:47.501 lat (usec): min=11538, max=15006, avg=13681.93, stdev=265.46 00:10:47.501 clat percentiles (usec): 00:10:47.501 | 1.00th=[12518], 5.00th=[12780], 10.00th=[13042], 20.00th=[13435], 00:10:47.501 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:10:47.501 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14222], 00:10:47.501 | 99.00th=[14353], 99.50th=[14353], 99.90th=[14484], 99.95th=[14615], 00:10:47.501 | 99.99th=[14615] 00:10:47.501 bw ( KiB/s): min=17680, max=19184, per=22.74%, avg=18432.00, stdev=1063.49, samples=2 00:10:47.501 iops : min= 4420, max= 4796, avg=4608.00, stdev=265.87, samples=2 00:10:47.501 lat (msec) : 4=0.14%, 10=0.57%, 20=99.29% 00:10:47.501 cpu : usr=1.90%, sys=4.59%, ctx=1289, majf=0, minf=1 00:10:47.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.501 issued rwts: total=4496,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.501 00:10:47.501 Run status group 0 (all jobs): 00:10:47.501 READ: bw=76.7MiB/s (80.4MB/s), 8143KiB/s-33.8MiB/s (8339kB/s-35.4MB/s), io=77.1MiB (80.9MB), run=1003-1006msec 00:10:47.501 WRITE: bw=79.1MiB/s (83.0MB/s), 9606KiB/s-34.0MiB/s (9837kB/s-35.6MB/s), io=79.6MiB (83.5MB), run=1003-1006msec 00:10:47.501 00:10:47.501 Disk 
stats (read/write): 00:10:47.501 nvme0n1: ios=7218/7616, merge=0/0, ticks=52829/53230, in_queue=106059, util=86.47% 00:10:47.501 nvme0n2: ios=1722/2048, merge=0/0, ticks=12358/14242, in_queue=26600, util=86.79% 00:10:47.501 nvme0n3: ios=3646/4096, merge=0/0, ticks=13187/13822, in_queue=27009, util=88.96% 00:10:47.501 nvme0n4: ios=3645/4096, merge=0/0, ticks=13169/13817, in_queue=26986, util=89.71% 00:10:47.501 00:54:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:47.501 00:54:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=242149 00:10:47.501 00:54:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:47.501 00:54:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:47.501 [global] 00:10:47.501 thread=1 00:10:47.501 invalidate=1 00:10:47.501 rw=read 00:10:47.501 time_based=1 00:10:47.501 runtime=10 00:10:47.501 ioengine=libaio 00:10:47.501 direct=1 00:10:47.501 bs=4096 00:10:47.501 iodepth=1 00:10:47.501 norandommap=1 00:10:47.501 numjobs=1 00:10:47.501 00:10:47.501 [job0] 00:10:47.501 filename=/dev/nvme0n1 00:10:47.501 [job1] 00:10:47.502 filename=/dev/nvme0n2 00:10:47.502 [job2] 00:10:47.502 filename=/dev/nvme0n3 00:10:47.502 [job3] 00:10:47.502 filename=/dev/nvme0n4 00:10:47.502 Could not set queue depth (nvme0n1) 00:10:47.502 Could not set queue depth (nvme0n2) 00:10:47.502 Could not set queue depth (nvme0n3) 00:10:47.502 Could not set queue depth (nvme0n4) 00:10:47.502 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.502 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.502 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.502 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.502 fio-3.35 00:10:47.502 Starting 4 threads 00:10:50.780 00:54:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:50.780 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=66330624, buflen=4096 00:10:50.780 fio: pid=242299, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:50.780 00:54:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:50.780 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=83460096, buflen=4096 00:10:50.780 fio: pid=242298, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:50.780 00:54:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.780 00:54:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:50.780 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=53313536, buflen=4096 00:10:50.780 fio: pid=242296, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:51.037 00:54:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- 
# for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.037 00:54:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:51.296 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=28459008, buflen=4096 00:10:51.296 fio: pid=242297, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:51.296 00:10:51.296 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=242296: Tue Nov 19 00:54:57 2024 00:10:51.296 read: IOPS=9369, BW=36.6MiB/s (38.4MB/s)(115MiB/3138msec) 00:10:51.296 slat (usec): min=6, max=15916, avg= 9.31, stdev=169.37 00:10:51.296 clat (usec): min=62, max=21374, avg=95.56, stdev=142.12 00:10:51.296 lat (usec): min=69, max=21381, avg=104.87, stdev=221.43 00:10:51.296 clat percentiles (usec): 00:10:51.296 | 1.00th=[ 76], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 89], 00:10:51.296 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 95], 00:10:51.296 | 70.00th=[ 96], 80.00th=[ 98], 90.00th=[ 102], 95.00th=[ 106], 00:10:51.296 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 155], 99.95th=[ 165], 00:10:51.296 | 99.99th=[ 7701] 00:10:51.296 bw ( KiB/s): min=32593, max=39360, per=37.27%, avg=37953.50, stdev=2650.02, samples=6 00:10:51.296 iops : min= 8148, max= 9840, avg=9488.33, stdev=662.61, samples=6 00:10:51.296 lat (usec) : 100=86.06%, 250=13.91%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:51.296 lat (msec) : 10=0.01%, 50=0.01% 00:10:51.296 cpu : usr=3.41%, sys=10.68%, ctx=29406, majf=0, minf=1 00:10:51.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.296 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.296 issued rwts: total=29401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.296 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=242297: Tue Nov 19 00:54:57 2024 00:10:51.296 read: IOPS=6651, BW=26.0MiB/s (27.2MB/s)(91.1MiB/3508msec) 00:10:51.296 slat (usec): min=6, max=19895, avg=13.12, stdev=264.17 00:10:51.296 clat (usec): min=50, max=842, avg=135.44, stdev=48.84 00:10:51.296 lat (usec): min=69, max=19985, avg=148.56, stdev=268.60 00:10:51.296 clat percentiles (usec): 00:10:51.296 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 73], 20.00th=[ 79], 00:10:51.296 | 30.00th=[ 94], 40.00th=[ 131], 50.00th=[ 141], 60.00th=[ 147], 00:10:51.296 | 70.00th=[ 161], 80.00th=[ 178], 90.00th=[ 192], 95.00th=[ 229], 00:10:51.296 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 258], 99.95th=[ 265], 00:10:51.296 | 99.99th=[ 494] 00:10:51.296 bw ( KiB/s): min=19544, max=32426, per=23.75%, avg=24185.67, stdev=4891.29, samples=6 00:10:51.296 iops : min= 4886, max= 8106, avg=6046.33, stdev=1222.65, samples=6 00:10:51.296 lat (usec) : 100=33.41%, 250=66.09%, 500=0.49%, 750=0.01%, 1000=0.01% 00:10:51.296 cpu : usr=2.62%, sys=9.69%, ctx=23340, majf=0, minf=2 00:10:51.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.296 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.296 issued rwts: total=23333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.296 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:10:51.296 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=242298: Tue Nov 19 00:54:57 2024 00:10:51.296 read: IOPS=6956, BW=27.2MiB/s (28.5MB/s)(79.6MiB/2929msec) 00:10:51.296 slat (usec): min=4, max=15853, avg= 8.88, stdev=137.93 00:10:51.296 clat (usec): min=79, max=650, avg=132.66, stdev=37.02 00:10:51.296 lat (usec): min=95, max=15995, avg=141.54, stdev=142.86 00:10:51.296 clat percentiles (usec): 00:10:51.296 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 100], 20.00th=[ 102], 00:10:51.296 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 113], 60.00th=[ 120], 00:10:51.296 | 70.00th=[ 165], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 196], 00:10:51.296 | 99.00th=[ 217], 99.50th=[ 221], 99.90th=[ 237], 99.95th=[ 245], 00:10:51.296 | 99.99th=[ 262] 00:10:51.296 bw ( KiB/s): min=22144, max=34920, per=26.99%, avg=27480.00, stdev=5770.28, samples=5 00:10:51.296 iops : min= 5536, max= 8730, avg=6870.00, stdev=1442.57, samples=5 00:10:51.296 lat (usec) : 100=11.70%, 250=88.26%, 500=0.03%, 750=0.01% 00:10:51.296 cpu : usr=1.78%, sys=9.02%, ctx=20387, majf=0, minf=2 00:10:51.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.296 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.296 issued rwts: total=20377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.296 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=242299: Tue Nov 19 00:54:57 2024 00:10:51.296 read: IOPS=5998, BW=23.4MiB/s (24.6MB/s)(63.3MiB/2700msec) 00:10:51.296 slat (nsec): min=6297, max=39465, avg=7626.40, stdev=1218.27 00:10:51.296 clat (usec): min=90, max=1000, avg=157.38, stdev=26.23 00:10:51.296 lat (usec): min=97, max=1007, avg=165.01, stdev=26.13 00:10:51.296 clat percentiles (usec): 00:10:51.296 | 1.00th=[ 102], 5.00th=[ 118], 10.00th=[ 130], 20.00th=[ 139], 00:10:51.296 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 163], 00:10:51.296 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 202], 00:10:51.296 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 237], 99.95th=[ 241], 00:10:51.296 | 99.99th=[ 529] 00:10:51.296 bw ( KiB/s): min=22088, max=26416, per=23.53%, avg=23956.80, stdev=1951.43, samples=5 00:10:51.296 iops : min= 5522, max= 6604, avg=5989.20, stdev=487.86, samples=5 00:10:51.296 lat (usec) : 100=0.63%, 250=99.35%, 500=0.01%, 750=0.01% 00:10:51.296 lat (msec) : 2=0.01% 00:10:51.296 cpu : usr=1.93%, sys=7.52%, ctx=16195, majf=0, minf=2 00:10:51.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.296 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.296 issued rwts: total=16195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.296 00:10:51.296 Run status group 0 (all jobs): 00:10:51.296 READ: bw=99.4MiB/s (104MB/s), 23.4MiB/s-36.6MiB/s (24.6MB/s-38.4MB/s), io=349MiB (366MB), run=2700-3508msec 00:10:51.296 00:10:51.296 Disk stats (read/write): 00:10:51.296 nvme0n1: ios=29279/0, merge=0/0, ticks=2613/0, in_queue=2613, util=93.99% 00:10:51.296 nvme0n2: ios=21559/0, merge=0/0, ticks=2901/0, in_queue=2901, util=93.68% 
00:10:51.296 nvme0n3: ios=19997/0, merge=0/0, ticks=2519/0, in_queue=2519, util=95.64% 00:10:51.296 nvme0n4: ios=15644/0, merge=0/0, ticks=2336/0, in_queue=2336, util=96.41% 00:10:51.554 00:54:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.554 00:54:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:51.812 00:54:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.812 00:54:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:52.377 00:54:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.377 00:54:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:52.651 00:54:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.651 00:54:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:52.908 00:54:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.908 00:54:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:53.166 00:54:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:53.166 00:54:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 242149 00:10:53.166 00:54:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:53.166 00:54:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.098 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.098 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:54.098 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:54.098 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.098 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:54.098 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.098 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:54.098 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:54.098 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
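The hotplug phase traced above starts a 10-second libaio read job against the four exported namespaces and then deletes the backing raid/malloc bdevs over RPC while I/O is still in flight, so the reads fail with "Operation not supported" and the wrapper records fio_status=4, which the script treats as the expected outcome (the echoed confirmation follows). A minimal sketch of that pattern, assuming the rpc.py path from this workspace and simplified fio options (the real run goes through scripts/fio-wrapper):

# Minimal sketch only, not the actual target/fio.sh; fio options are simplified assumptions.
rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
# Start a time-based read job against one exported namespace in the background.
fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4k --iodepth=1 \
    --ioengine=libaio --direct=1 --time_based --runtime=10 &
fio_pid=$!
# Delete the backing bdevs while the job is still running.
$rpc bdev_raid_delete concat0
$rpc bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete "$m"
done
# fio is expected to exit non-zero once its target devices disappear.
wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'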
00:10:54.098 nvmf hotplug test: fio failed as expected 00:10:54.098 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:54.357 rmmod nvme_rdma 00:10:54.357 rmmod nvme_fabrics 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 239269 ']' 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 239269 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 239269 ']' 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 239269 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 239269 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 239269' 00:10:54.357 killing process with pid 239269 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 239269 00:10:54.357 00:55:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 239269 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:55.733 00:10:55.733 real 0m27.660s 00:10:55.733 user 1m59.244s 00:10:55.733 sys 0m8.805s 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.733 ************************************ 00:10:55.733 END TEST nvmf_fio_target 00:10:55.733 ************************************ 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.733 ************************************ 00:10:55.733 START TEST nvmf_bdevio 00:10:55.733 ************************************ 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:55.733 * Looking for test storage... 00:10:55.733 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.733 00:55:02 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:55.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.733 --rc genhtml_branch_coverage=1 00:10:55.733 --rc genhtml_function_coverage=1 00:10:55.733 --rc genhtml_legend=1 00:10:55.733 --rc geninfo_all_blocks=1 00:10:55.733 --rc geninfo_unexecuted_blocks=1 00:10:55.733 00:10:55.733 ' 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:55.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.733 --rc genhtml_branch_coverage=1 00:10:55.733 --rc genhtml_function_coverage=1 00:10:55.733 --rc genhtml_legend=1 00:10:55.733 --rc geninfo_all_blocks=1 00:10:55.733 --rc geninfo_unexecuted_blocks=1 00:10:55.733 00:10:55.733 ' 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:55.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.733 --rc genhtml_branch_coverage=1 00:10:55.733 --rc genhtml_function_coverage=1 00:10:55.733 --rc genhtml_legend=1 00:10:55.733 --rc geninfo_all_blocks=1 00:10:55.733 --rc geninfo_unexecuted_blocks=1 00:10:55.733 00:10:55.733 ' 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:55.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.733 --rc genhtml_branch_coverage=1 00:10:55.733 --rc genhtml_function_coverage=1 00:10:55.733 --rc genhtml_legend=1 00:10:55.733 --rc geninfo_all_blocks=1 00:10:55.733 --rc geninfo_unexecuted_blocks=1 00:10:55.733 00:10:55.733 ' 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.733 00:55:02 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.733 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.993 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:55.993 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:55.993 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.994 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.994 00:55:02 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:02.581 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:02.581 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@405 -- # modinfo irdma 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:02.581 Found net devices under 0000:af:00.0: cvl_0_0 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:02.581 Found net devices under 0000:af:00.1: cvl_0_1 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:02.581 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:11:02.582 
00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:11:02.582 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:02.582 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:11:02.582 altname enp175s0f0np0 00:11:02.582 altname ens801f0np0 00:11:02.582 inet 192.168.100.8/24 scope global cvl_0_0 00:11:02.582 valid_lft forever preferred_lft forever 00:11:02.582 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:11:02.582 valid_lft forever preferred_lft forever 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:11:02.582 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:02.582 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:11:02.582 altname enp175s0f1np1 00:11:02.582 altname ens801f1np1 00:11:02.582 inet 192.168.100.9/24 scope global cvl_0_1 00:11:02.582 valid_lft forever preferred_lft forever 00:11:02.582 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:11:02.582 valid_lft forever preferred_lft forever 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:02.582 192.168.100.9' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:02.582 192.168.100.9' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:02.582 00:55:08 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:02.582 192.168.100.9' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=246727 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 246727 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 246727 ']' 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.582 00:55:08 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.582 [2024-11-19 00:55:08.377049] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:02.582 [2024-11-19 00:55:08.377149] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.582 [2024-11-19 00:55:08.506849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.582 [2024-11-19 00:55:08.615493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.582 [2024-11-19 00:55:08.615540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:02.582 [2024-11-19 00:55:08.615551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.582 [2024-11-19 00:55:08.615561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.582 [2024-11-19 00:55:08.615569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.582 [2024-11-19 00:55:08.618054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:02.582 [2024-11-19 00:55:08.618135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:02.582 [2024-11-19 00:55:08.618207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.582 [2024-11-19 00:55:08.618230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:02.582 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.582 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:02.582 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.583 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.583 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.583 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.583 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:02.583 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.583 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.583 [2024-11-19 00:55:09.240135] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:11:02.583 [2024-11-19 00:55:09.249730] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:11:02.583 [2024-11-19 00:55:09.249758] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:11:02.583 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.583 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:02.583 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.583 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.842 Malloc0 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.842 [2024-11-19 00:55:09.370586] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:02.842 { 00:11:02.842 "params": { 00:11:02.842 "name": "Nvme$subsystem", 00:11:02.842 "trtype": "$TEST_TRANSPORT", 00:11:02.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:02.842 "adrfam": "ipv4", 00:11:02.842 "trsvcid": "$NVMF_PORT", 00:11:02.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:02.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:02.842 "hdgst": ${hdgst:-false}, 00:11:02.842 "ddgst": ${ddgst:-false} 00:11:02.842 }, 00:11:02.842 "method": "bdev_nvme_attach_controller" 00:11:02.842 } 00:11:02.842 EOF 00:11:02.842 )") 
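Before launching the bdevio unit tests, the script assembles the target over RPC (an RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem with one namespace, and an RDMA listener on 192.168.100.8:4420) and then hands the bdevio binary a JSON bdev config on /dev/fd/62 describing how to attach to that subsystem; the rendered JSON is printed next in the log. A condensed sketch of the same RPC sequence, assuming the rpc.py path from this workspace:

# Condensed sketch of the target-side setup performed above via rpc.py.
rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# bdevio then consumes a bdev_nvme_attach_controller JSON config (shown below)
# and runs its CUnit suite against the resulting Nvme1n1 bdev.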
00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:02.842 00:55:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:02.842 "params": { 00:11:02.842 "name": "Nvme1", 00:11:02.842 "trtype": "rdma", 00:11:02.842 "traddr": "192.168.100.8", 00:11:02.842 "adrfam": "ipv4", 00:11:02.842 "trsvcid": "4420", 00:11:02.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:02.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:02.842 "hdgst": false, 00:11:02.842 "ddgst": false 00:11:02.842 }, 00:11:02.842 "method": "bdev_nvme_attach_controller" 00:11:02.842 }' 00:11:02.842 [2024-11-19 00:55:09.448566] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:02.842 [2024-11-19 00:55:09.448648] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246975 ] 00:11:03.100 [2024-11-19 00:55:09.572306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.100 [2024-11-19 00:55:09.691938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.100 [2024-11-19 00:55:09.692010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.100 [2024-11-19 00:55:09.692032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.665 I/O targets: 00:11:03.665 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:03.665 00:11:03.665 00:11:03.665 CUnit - A unit testing framework for C - Version 2.1-3 00:11:03.665 http://cunit.sourceforge.net/ 00:11:03.665 00:11:03.665 00:11:03.665 Suite: bdevio tests on: Nvme1n1 00:11:03.665 Test: blockdev write read block ...passed 00:11:03.665 Test: blockdev write zeroes read block ...passed 00:11:03.665 Test: blockdev write zeroes read no split ...passed 00:11:03.665 Test: blockdev write zeroes read split ...passed 00:11:03.665 Test: blockdev write zeroes read split partial ...passed 00:11:03.665 Test: blockdev reset ...[2024-11-19 00:55:10.217013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:03.665 [2024-11-19 00:55:10.259226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:11:03.665 [2024-11-19 00:55:10.289725] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:03.665 passed 00:11:03.665 Test: blockdev write read 8 blocks ...passed 00:11:03.665 Test: blockdev write read size > 128k ...passed 00:11:03.665 Test: blockdev write read invalid size ...passed 00:11:03.665 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.665 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.665 Test: blockdev write read max offset ...passed 00:11:03.665 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.665 Test: blockdev writev readv 8 blocks ...passed 00:11:03.665 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.665 Test: blockdev writev readv block ...passed 00:11:03.665 Test: blockdev writev readv size > 128k ...passed 00:11:03.665 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.665 Test: blockdev comparev and writev ...[2024-11-19 00:55:10.298982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.665 [2024-11-19 00:55:10.299020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:03.665 [2024-11-19 00:55:10.299036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.665 [2024-11-19 00:55:10.299049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:03.665 [2024-11-19 00:55:10.299248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.665 [2024-11-19 00:55:10.299263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:11:03.665 [2024-11-19 00:55:10.299275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.665 [2024-11-19 00:55:10.299289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:11:03.665 [2024-11-19 00:55:10.299485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.665 [2024-11-19 00:55:10.299503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:11:03.665 [2024-11-19 00:55:10.299515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.665 [2024-11-19 00:55:10.299527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:03.665 [2024-11-19 00:55:10.299726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.665 [2024-11-19 00:55:10.299744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:03.665 [2024-11-19 00:55:10.299755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.665 [2024-11-19 00:55:10.299771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:03.665 passed 00:11:03.665 Test: blockdev nvme passthru rw ...passed 00:11:03.665 Test: blockdev nvme passthru vendor specific ...[2024-11-19 00:55:10.300169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:03.665 [2024-11-19 00:55:10.300189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:11:03.665 [2024-11-19 00:55:10.300247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:03.665 [2024-11-19 00:55:10.300260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:11:03.665 [2024-11-19 00:55:10.300329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:03.665 [2024-11-19 00:55:10.300344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:11:03.665 [2024-11-19 00:55:10.300403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:03.665 [2024-11-19 00:55:10.300417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:11:03.665 passed 00:11:03.665 Test: blockdev nvme admin passthru ...passed 00:11:03.665 Test: blockdev copy ...passed 00:11:03.665 00:11:03.665 Run Summary: Type Total Ran Passed Failed Inactive 00:11:03.665 suites 1 1 n/a 0 0 00:11:03.665 tests 23 23 23 0 0 00:11:03.665 asserts 152 152 152 0 n/a 00:11:03.665 00:11:03.665 Elapsed time = 0.429 seconds 00:11:04.598 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.598 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:04.599 rmmod nvme_rdma 00:11:04.599 rmmod nvme_fabrics 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.599 00:55:11 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 246727 ']' 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 246727 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 246727 ']' 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 246727 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.599 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 246727 00:11:04.857 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:04.857 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:04.857 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 246727' 00:11:04.857 killing process with pid 246727 00:11:04.857 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 246727 00:11:04.857 00:55:11 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 246727 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:06.234 00:11:06.234 real 0m10.481s 00:11:06.234 user 0m21.251s 00:11:06.234 sys 0m5.154s 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.234 ************************************ 00:11:06.234 END TEST nvmf_bdevio 00:11:06.234 ************************************ 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:06.234 00:11:06.234 real 4m21.583s 00:11:06.234 user 11m54.379s 00:11:06.234 sys 1m25.487s 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.234 ************************************ 00:11:06.234 END TEST nvmf_target_core 00:11:06.234 ************************************ 00:11:06.234 00:55:12 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:06.234 00:55:12 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.234 00:55:12 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.234 00:55:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:06.234 ************************************ 00:11:06.234 START TEST nvmf_target_extra 00:11:06.234 ************************************ 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:06.234 * Looking for test storage... 00:11:06.234 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.234 00:55:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:06.494 00:55:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.494 00:55:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.494 --rc genhtml_branch_coverage=1 00:11:06.494 --rc genhtml_function_coverage=1 00:11:06.494 --rc genhtml_legend=1 00:11:06.494 --rc geninfo_all_blocks=1 00:11:06.494 --rc geninfo_unexecuted_blocks=1 00:11:06.494 00:11:06.494 ' 00:11:06.494 00:55:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.494 --rc genhtml_branch_coverage=1 00:11:06.494 --rc genhtml_function_coverage=1 00:11:06.494 --rc genhtml_legend=1 00:11:06.494 --rc geninfo_all_blocks=1 00:11:06.494 --rc geninfo_unexecuted_blocks=1 00:11:06.494 00:11:06.494 ' 00:11:06.494 00:55:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.494 --rc genhtml_branch_coverage=1 00:11:06.494 --rc genhtml_function_coverage=1 00:11:06.494 --rc genhtml_legend=1 00:11:06.494 --rc geninfo_all_blocks=1 00:11:06.494 --rc geninfo_unexecuted_blocks=1 00:11:06.494 00:11:06.494 ' 00:11:06.494 00:55:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.494 --rc genhtml_branch_coverage=1 00:11:06.494 --rc genhtml_function_coverage=1 00:11:06.494 --rc genhtml_legend=1 00:11:06.494 --rc geninfo_all_blocks=1 00:11:06.494 --rc geninfo_unexecuted_blocks=1 00:11:06.494 00:11:06.494 ' 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.495 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.495 ************************************ 00:11:06.495 START TEST nvmf_example 00:11:06.495 ************************************ 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:06.495 * Looking for test storage... 
00:11:06.495 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.495 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.756 --rc genhtml_branch_coverage=1 00:11:06.756 --rc genhtml_function_coverage=1 00:11:06.756 --rc genhtml_legend=1 00:11:06.756 --rc geninfo_all_blocks=1 00:11:06.756 --rc geninfo_unexecuted_blocks=1 00:11:06.756 00:11:06.756 ' 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.756 --rc genhtml_branch_coverage=1 00:11:06.756 --rc genhtml_function_coverage=1 00:11:06.756 --rc genhtml_legend=1 00:11:06.756 --rc geninfo_all_blocks=1 00:11:06.756 --rc geninfo_unexecuted_blocks=1 00:11:06.756 00:11:06.756 ' 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.756 --rc genhtml_branch_coverage=1 00:11:06.756 --rc genhtml_function_coverage=1 00:11:06.756 --rc genhtml_legend=1 00:11:06.756 --rc geninfo_all_blocks=1 00:11:06.756 --rc geninfo_unexecuted_blocks=1 00:11:06.756 00:11:06.756 ' 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.756 --rc genhtml_branch_coverage=1 00:11:06.756 --rc genhtml_function_coverage=1 00:11:06.756 --rc genhtml_legend=1 00:11:06.756 --rc geninfo_all_blocks=1 00:11:06.756 --rc geninfo_unexecuted_blocks=1 00:11:06.756 00:11:06.756 ' 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
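The lcov version gate traced above (lt 1.15 2, expanded by cmp_versions) is a left-to-right numeric comparison of dot-separated components. A minimal sketch of the same idea, written fresh rather than copied from scripts/common.sh:

    # Succeed when version $1 sorts strictly before version $2.
    version_lt() {
        local -a v1 v2
        local IFS=.-
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"    # 1 < 2 on the first component, as in the trace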
00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.756 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.757 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
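The hostnqn/hostid pair generated above with nvme gen-hostnqn, together with the 192.168.100.x addresses and port 4420 used throughout this log, is everything a kernel initiator needs to attach to the exported subsystem. A minimal nvme-cli sketch, assuming the cnode1 subsystem from the bdevio run earlier is still listening; the harness issues the same kind of command through its NVME_CONNECT/NVME_HOST variables:

    # Attach to the RDMA listener advertised above, then detach when finished.
    nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid=801347e8-3fd0-e911-906e-0017a4403562
    nvme list                                        # the namespace shows up as a local block device
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1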
00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:06.757 00:55:13 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.334 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:13.335 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.335 00:55:18 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:13.335 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@405 -- # modinfo irdma 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:13.335 Found net devices under 0000:af:00.0: cvl_0_0 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:13.335 00:55:18 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:13.335 Found net devices under 0000:af:00.1: cvl_0_1 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_0 == 
\c\v\l\_\0\_\0 ]] 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.335 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:11:13.336 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:13.336 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:11:13.336 altname enp175s0f0np0 00:11:13.336 altname ens801f0np0 00:11:13.336 inet 192.168.100.8/24 scope global cvl_0_0 00:11:13.336 valid_lft forever preferred_lft forever 00:11:13.336 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:11:13.336 valid_lft forever preferred_lft forever 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:13.336 00:55:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:11:13.336 5: cvl_0_1: mtu 1500 qdisc mq state UP group 
default qlen 1000 00:11:13.336 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:11:13.336 altname enp175s0f1np1 00:11:13.336 altname ens801f1np1 00:11:13.336 inet 192.168.100.9/24 scope global cvl_0_1 00:11:13.336 valid_lft forever preferred_lft forever 00:11:13.336 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:11:13.336 valid_lft forever preferred_lft forever 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:13.336 
00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:13.336 192.168.100.9' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:13.336 192.168.100.9' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:13.336 192.168.100.9' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=250758 00:11:13.336 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:13.337 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:13.337 00:55:19 
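The xtrace above is nvmf/common.sh resolving the IPv4 address of each E810 port (cvl_0_0, cvl_0_1) and then splitting the resulting list into the first and second target IPs. A minimal sketch of that same pipeline, keeping only the get_ip_address helper and the two interface names from this run (a re-statement for readability, not the harness code itself):

    # Derive the IPv4 address assigned to an RDMA-capable netdev, using the same
    # ip/awk/cut pipeline the trace runs for cvl_0_0 and cvl_0_1.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # One address per interface, then split with head/tail as the trace does.
    RDMA_IP_LIST=$(for nic in cvl_0_0 cvl_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9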
nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 250758 00:11:13.337 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 250758 ']' 00:11:13.337 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.337 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.337 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.337 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.337 00:55:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.337 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.337 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:13.337 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:13.337 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:13.337 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
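Once the example target is up and waitforlisten has seen it on the RPC socket, the harness issues the subsystem setup through its rpc_cmd wrapper. The same sequence expressed directly against scripts/rpc.py from the SPDK checkout (assuming the app's default /var/tmp/spdk.sock socket; a sketch, not the wrapper itself):

    rpc=./scripts/rpc.py

    # RDMA transport with the shared-buffer count and IO-unit size used above.
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # 64 MiB malloc bdev with 512-byte blocks; the call returns the name Malloc0.
    $rpc bdev_malloc_create 64 512
    # Subsystem that allows any host (-a), with the serial number from the trace.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Attach the malloc bdev as namespace 1 of the subsystem.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

The next step in the trace adds an RDMA listener for this subsystem on the first target IP, after which the initiator side can connect.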
common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:13.595 00:55:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:25.786 Initializing NVMe Controllers 00:11:25.786 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.786 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:25.786 Initialization complete. Launching workers. 00:11:25.786 ======================================================== 00:11:25.786 Latency(us) 00:11:25.786 Device Information : IOPS MiB/s Average min max 00:11:25.786 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 21021.90 82.12 3044.30 756.65 14083.98 00:11:25.786 ======================================================== 00:11:25.786 Total : 21021.90 82.12 3044.30 756.65 14083.98 00:11:25.786 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:25.786 rmmod nvme_rdma 00:11:25.786 rmmod nvme_fabrics 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 250758 ']' 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
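For reference, the initiator-side command exercised above, with its flags unpacked (the transport string is the one from this run; the path is shortened to the build tree):

    # Random mixed workload: queue depth 64, 4 KiB I/Os, 30% reads, 10 seconds,
    # against the RDMA listener configured above.
    ./build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The flattened block that follows the command in the log is the tool's results table: roughly 21k IOPS (82.12 MiB/s) at an average latency of about 3.0 ms over the 10-second run.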
nvmf/common.sh@518 -- # killprocess 250758 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 250758 ']' 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 250758 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 250758 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 250758' 00:11:25.786 killing process with pid 250758 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 250758 00:11:25.786 00:55:31 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 250758 00:11:26.354 nvmf threads initialize successfully 00:11:26.354 bdev subsystem init successfully 00:11:26.354 created a nvmf target service 00:11:26.354 create targets's poll groups done 00:11:26.354 all subsystems of target started 00:11:26.354 nvmf target is running 00:11:26.354 all subsystems of target stopped 00:11:26.354 destroy targets's poll groups done 00:11:26.354 destroyed the nvmf target service 00:11:26.354 bdev subsystem finish successfully 00:11:26.354 nvmf threads destroy successfully 00:11:26.354 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.354 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:26.354 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:26.354 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.354 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.614 00:11:26.614 real 0m19.981s 00:11:26.614 user 0m55.782s 00:11:26.614 sys 0m4.870s 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.614 ************************************ 00:11:26.614 END TEST nvmf_example 00:11:26.614 ************************************ 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.614 ************************************ 00:11:26.614 START TEST nvmf_filesystem 00:11:26.614 ************************************ 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
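Teardown in the trace follows the usual pattern: unload the initiator-side kernel modules, then stop the target app by PID. A condensed sketch (it assumes it runs in the shell that launched the app, so wait can reap it; nvmfpid is the variable the harness set to 250758 earlier, and the name check done by killprocess is omitted):

    # Unload host-side NVMe-oF modules; -v makes modprobe print the rmmod calls
    # seen in the log (rmmod nvme_rdma, rmmod nvme_fabrics).
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics

    # Stop the example target and wait for it to exit.
    kill "$nvmfpid"
    wait "$nvmfpid" || true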
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:26.614 * Looking for test storage... 00:11:26.614 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:26.614 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:26.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.615 --rc genhtml_branch_coverage=1 00:11:26.615 --rc genhtml_function_coverage=1 00:11:26.615 --rc genhtml_legend=1 00:11:26.615 --rc geninfo_all_blocks=1 00:11:26.615 --rc geninfo_unexecuted_blocks=1 00:11:26.615 00:11:26.615 ' 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:26.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.615 --rc genhtml_branch_coverage=1 00:11:26.615 --rc genhtml_function_coverage=1 00:11:26.615 --rc genhtml_legend=1 00:11:26.615 --rc geninfo_all_blocks=1 00:11:26.615 --rc geninfo_unexecuted_blocks=1 00:11:26.615 00:11:26.615 ' 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:26.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.615 --rc genhtml_branch_coverage=1 00:11:26.615 --rc genhtml_function_coverage=1 00:11:26.615 --rc genhtml_legend=1 00:11:26.615 --rc geninfo_all_blocks=1 00:11:26.615 --rc geninfo_unexecuted_blocks=1 00:11:26.615 00:11:26.615 ' 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:26.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.615 --rc genhtml_branch_coverage=1 00:11:26.615 --rc genhtml_function_coverage=1 00:11:26.615 --rc genhtml_legend=1 00:11:26.615 --rc geninfo_all_blocks=1 00:11:26.615 --rc geninfo_unexecuted_blocks=1 00:11:26.615 00:11:26.615 ' 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh 00:11:26.615 00:55:33 
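The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both version strings on ., - or : and compares them numerically, component by component. A simplified re-statement of that comparison (the real helper also normalizes non-numeric components via decimal, which is omitted here):

    lt() {  # usage: lt VER1 VER2  -> exit 0 if VER1 < VER2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal, so not less-than
    }

    lt 1.15 2 && echo 'lcov 1.15 predates 2.x: add the branch/function coverage --rc options'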
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output ']' 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:26.615 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # 
CONFIG_ENV=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:26.878 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- 
# CONFIG_DPDK_UADK=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:26.879 00:55:33 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/config.h ]] 00:11:26.879 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:26.879 #define SPDK_CONFIG_H 00:11:26.879 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:26.879 #define SPDK_CONFIG_APPS 1 00:11:26.879 #define SPDK_CONFIG_ARCH native 00:11:26.879 #define SPDK_CONFIG_ASAN 1 00:11:26.879 #undef SPDK_CONFIG_AVAHI 00:11:26.879 #undef SPDK_CONFIG_CET 00:11:26.879 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:26.879 #define SPDK_CONFIG_COVERAGE 1 00:11:26.879 #define SPDK_CONFIG_CROSS_PREFIX 00:11:26.879 #undef SPDK_CONFIG_CRYPTO 00:11:26.879 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:26.879 #undef SPDK_CONFIG_CUSTOMOCF 00:11:26.879 #undef SPDK_CONFIG_DAOS 00:11:26.879 #define SPDK_CONFIG_DAOS_DIR 00:11:26.879 #define SPDK_CONFIG_DEBUG 1 00:11:26.879 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:26.879 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:11:26.879 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:26.879 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:26.879 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:26.879 #undef SPDK_CONFIG_DPDK_UADK 00:11:26.879 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:11:26.879 #define SPDK_CONFIG_EXAMPLES 1 00:11:26.879 #undef SPDK_CONFIG_FC 00:11:26.879 #define SPDK_CONFIG_FC_PATH 00:11:26.879 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:26.879 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:26.879 #define SPDK_CONFIG_FSDEV 1 00:11:26.879 #undef SPDK_CONFIG_FUSE 00:11:26.879 #undef SPDK_CONFIG_FUZZER 00:11:26.879 #define SPDK_CONFIG_FUZZER_LIB 00:11:26.879 #undef SPDK_CONFIG_GOLANG 00:11:26.879 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:26.879 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:26.879 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:26.879 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:26.879 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:26.880 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:26.880 #undef SPDK_CONFIG_HAVE_LZ4 00:11:26.880 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:26.880 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:26.880 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:26.880 #define SPDK_CONFIG_IDXD 1 00:11:26.880 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:26.880 #undef SPDK_CONFIG_IPSEC_MB 00:11:26.880 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:26.880 #define SPDK_CONFIG_ISAL 1 00:11:26.880 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:26.880 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:26.880 #define SPDK_CONFIG_LIBDIR 00:11:26.880 #undef SPDK_CONFIG_LTO 00:11:26.880 #define SPDK_CONFIG_MAX_LCORES 128 00:11:26.880 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:26.880 #define SPDK_CONFIG_NVME_CUSE 1 00:11:26.880 #undef SPDK_CONFIG_OCF 00:11:26.880 #define SPDK_CONFIG_OCF_PATH 00:11:26.880 #define SPDK_CONFIG_OPENSSL_PATH 00:11:26.880 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:26.880 #define SPDK_CONFIG_PGO_DIR 00:11:26.880 #undef SPDK_CONFIG_PGO_USE 00:11:26.880 #define SPDK_CONFIG_PREFIX /usr/local 00:11:26.880 #undef SPDK_CONFIG_RAID5F 00:11:26.880 #undef SPDK_CONFIG_RBD 00:11:26.880 #define SPDK_CONFIG_RDMA 1 00:11:26.880 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:26.880 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:26.880 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:26.880 #define 
SPDK_CONFIG_RDMA_SET_TOS 1 00:11:26.880 #define SPDK_CONFIG_SHARED 1 00:11:26.880 #undef SPDK_CONFIG_SMA 00:11:26.880 #define SPDK_CONFIG_TESTS 1 00:11:26.880 #undef SPDK_CONFIG_TSAN 00:11:26.880 #define SPDK_CONFIG_UBLK 1 00:11:26.880 #define SPDK_CONFIG_UBSAN 1 00:11:26.880 #undef SPDK_CONFIG_UNIT_TESTS 00:11:26.880 #undef SPDK_CONFIG_URING 00:11:26.880 #define SPDK_CONFIG_URING_PATH 00:11:26.880 #undef SPDK_CONFIG_URING_ZNS 00:11:26.880 #undef SPDK_CONFIG_USDT 00:11:26.880 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:26.880 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:26.880 #undef SPDK_CONFIG_VFIO_USER 00:11:26.880 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:26.880 #define SPDK_CONFIG_VHOST 1 00:11:26.880 #define SPDK_CONFIG_VIRTIO 1 00:11:26.880 #undef SPDK_CONFIG_VTUNE 00:11:26.880 #define SPDK_CONFIG_VTUNE_DIR 00:11:26.880 #define SPDK_CONFIG_WERROR 1 00:11:26.880 #define SPDK_CONFIG_WPDK_DIR 00:11:26.880 #undef SPDK_CONFIG_XNVME 00:11:26.880 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.run_test_name 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:26.880 00:55:33 
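The applications.sh step traced above gates debug-only app behaviour on whether the generated config header defines SPDK_CONFIG_DEBUG. Restated outside the harness (path relative to the SPDK checkout; a sketch of the same test):

    # Enable SPDK_AUTOTEST_DEBUG_APPS handling only for debug builds.
    config_h=include/spdk/config.h
    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo 'debug build detected: SPDK_AUTOTEST_DEBUG_APPS may take effect'
    fi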
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:26.880 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power ]] 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:26.881 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.882 00:55:33 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:11:26.882 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:26.883 00:55:33 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 253105 ]] 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 
253105 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.tm5lCv 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target /tmp/spdk.tm5lCv/tests/target /tmp/spdk.tm5lCv 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 
00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=89636175872 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552401408 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5916225536 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47761403904 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776198656 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=14794752 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087429632 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110481920 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23052288 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:26.883 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47776026624 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=176128 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use 
avail _ mount 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:26.884 * Looking for test storage... 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=89636175872 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8130818048 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:26.884 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@402 -- # return 0 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- scripts/common.sh@344 -- # case "$op" in 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.884 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:26.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.885 --rc genhtml_branch_coverage=1 00:11:26.885 --rc genhtml_function_coverage=1 00:11:26.885 --rc genhtml_legend=1 00:11:26.885 --rc geninfo_all_blocks=1 00:11:26.885 --rc geninfo_unexecuted_blocks=1 00:11:26.885 00:11:26.885 ' 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:26.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.885 --rc genhtml_branch_coverage=1 00:11:26.885 --rc genhtml_function_coverage=1 00:11:26.885 --rc genhtml_legend=1 00:11:26.885 --rc geninfo_all_blocks=1 00:11:26.885 --rc geninfo_unexecuted_blocks=1 00:11:26.885 00:11:26.885 ' 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:26.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.885 --rc genhtml_branch_coverage=1 00:11:26.885 --rc genhtml_function_coverage=1 00:11:26.885 --rc genhtml_legend=1 00:11:26.885 --rc geninfo_all_blocks=1 00:11:26.885 --rc geninfo_unexecuted_blocks=1 00:11:26.885 00:11:26.885 ' 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:26.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.885 --rc genhtml_branch_coverage=1 00:11:26.885 --rc 
genhtml_function_coverage=1 00:11:26.885 --rc genhtml_legend=1 00:11:26.885 --rc geninfo_all_blocks=1 00:11:26.885 --rc geninfo_unexecuted_blocks=1 00:11:26.885 00:11:26.885 ' 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.885 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.885 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.145 00:55:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@320 -- # e810=() 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.718 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:33.719 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.719 00:55:39 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:33.719 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@405 -- # modinfo irdma 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:33.719 Found net devices under 0000:af:00.0: cvl_0_0 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:33.719 Found net devices under 0000:af:00.1: cvl_0_1 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:11:33.719 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:33.719 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:11:33.719 altname enp175s0f0np0 00:11:33.719 altname ens801f0np0 00:11:33.719 inet 192.168.100.8/24 scope global cvl_0_0 00:11:33.719 valid_lft forever preferred_lft forever 00:11:33.719 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:11:33.719 valid_lft forever preferred_lft forever 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:33.719 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:33.719 00:55:39 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:11:33.720 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:33.720 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:11:33.720 altname enp175s0f1np1 00:11:33.720 altname ens801f1np1 00:11:33.720 inet 192.168.100.9/24 scope global cvl_0_1 00:11:33.720 valid_lft forever preferred_lft forever 00:11:33.720 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:11:33.720 valid_lft forever preferred_lft forever 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:33.720 00:55:39 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:33.720 192.168.100.9' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:33.720 192.168.100.9' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:33.720 192.168.100.9' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.720 ************************************ 00:11:33.720 START TEST 
nvmf_filesystem_no_in_capsule 00:11:33.720 ************************************ 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=256353 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 256353 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 256353 ']' 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.720 00:55:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.720 [2024-11-19 00:55:39.616592] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:33.720 [2024-11-19 00:55:39.616685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.720 [2024-11-19 00:55:39.744744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.720 [2024-11-19 00:55:39.853132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.720 [2024-11-19 00:55:39.853180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
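For orientation, the nvmfappstart step traced above boils down to launching the target binary and waiting for its RPC socket. A minimal sketch of that sequence follows, with the binary path and flags taken from the trace; the polling loop is a simplified stand-in for the suite's waitforlisten helper, and rpc_get_methods is used here only as a cheap liveness query, not what the script literally runs.
  # launch the NVMe-oF target on core mask 0xF with all tracepoint groups enabled, as in the trace
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until the app answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock)
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done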
00:11:33.720 [2024-11-19 00:55:39.853191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.720 [2024-11-19 00:55:39.853202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.720 [2024-11-19 00:55:39.853210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.720 [2024-11-19 00:55:39.855567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.720 [2024-11-19 00:55:39.855604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.720 [2024-11-19 00:55:39.855692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.720 [2024-11-19 00:55:39.855713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.979 [2024-11-19 00:55:40.465252] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:33.979 [2024-11-19 00:55:40.482326] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:11:33.979 [2024-11-19 00:55:40.491872] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:11:33.979 [2024-11-19 00:55:40.491904] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.979 00:55:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.544 Malloc1 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.545 [2024-11-19 00:55:41.087414] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:34.545 00:55:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:34.545 { 00:11:34.545 "name": "Malloc1", 00:11:34.545 "aliases": [ 00:11:34.545 "8bc79255-43ec-4592-81bd-9d8a82f86bb7" 00:11:34.545 ], 00:11:34.545 "product_name": "Malloc disk", 00:11:34.545 "block_size": 512, 00:11:34.545 "num_blocks": 1048576, 00:11:34.545 "uuid": "8bc79255-43ec-4592-81bd-9d8a82f86bb7", 00:11:34.545 "assigned_rate_limits": { 00:11:34.545 "rw_ios_per_sec": 0, 00:11:34.545 "rw_mbytes_per_sec": 0, 00:11:34.545 "r_mbytes_per_sec": 0, 00:11:34.545 "w_mbytes_per_sec": 0 00:11:34.545 }, 00:11:34.545 "claimed": true, 00:11:34.545 "claim_type": "exclusive_write", 00:11:34.545 "zoned": false, 00:11:34.545 "supported_io_types": { 00:11:34.545 "read": true, 00:11:34.545 "write": true, 00:11:34.545 "unmap": true, 00:11:34.545 "flush": true, 00:11:34.545 "reset": true, 00:11:34.545 "nvme_admin": false, 00:11:34.545 "nvme_io": false, 00:11:34.545 "nvme_io_md": false, 00:11:34.545 "write_zeroes": true, 00:11:34.545 "zcopy": true, 00:11:34.545 "get_zone_info": false, 00:11:34.545 "zone_management": false, 00:11:34.545 "zone_append": false, 00:11:34.545 "compare": false, 00:11:34.545 "compare_and_write": false, 00:11:34.545 "abort": true, 00:11:34.545 "seek_hole": false, 00:11:34.545 "seek_data": false, 00:11:34.545 "copy": true, 00:11:34.545 "nvme_iov_md": false 00:11:34.545 }, 00:11:34.545 "memory_domains": [ 00:11:34.545 { 00:11:34.545 "dma_device_id": "system", 00:11:34.545 "dma_device_type": 1 00:11:34.545 }, 00:11:34.545 { 00:11:34.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.545 "dma_device_type": 2 00:11:34.545 } 00:11:34.545 ], 00:11:34.545 "driver_specific": {} 00:11:34.545 } 00:11:34.545 ]' 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:11:34.545 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:34.803 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.803 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.803 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.803 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:34.803 00:55:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:37.328 00:55:43 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.261 ************************************ 00:11:38.261 START TEST filesystem_ext4 00:11:38.261 ************************************ 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:38.261 mke2fs 1.47.0 (5-Feb-2023) 00:11:38.261 Discarding device blocks: 0/522240 done 00:11:38.261 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:38.261 Filesystem UUID: 64714a82-0071-4edb-8135-dc9f94a35e69 00:11:38.261 Superblock backups stored on 
blocks: 00:11:38.261 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:38.261 00:11:38.261 Allocating group tables: 0/64 done 00:11:38.261 Writing inode tables: 0/64 done 00:11:38.261 Creating journal (8192 blocks): done 00:11:38.261 Writing superblocks and filesystem accounting information: 0/64 done 00:11:38.261 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 256353 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.261 00:11:38.261 real 0m0.251s 00:11:38.261 user 0m0.025s 00:11:38.261 sys 0m0.117s 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.261 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:38.261 ************************************ 00:11:38.261 END TEST filesystem_ext4 00:11:38.261 ************************************ 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:11:38.518 ************************************ 00:11:38.518 START TEST filesystem_btrfs 00:11:38.518 ************************************ 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.518 00:55:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:38.518 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:38.518 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:38.519 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:38.519 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:38.519 btrfs-progs v6.8.1 00:11:38.519 See https://btrfs.readthedocs.io for more information. 00:11:38.519 00:11:38.519 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:38.519 NOTE: several default settings have changed in version 5.15, please make sure 00:11:38.519 this does not affect your deployments: 00:11:38.519 - DUP for metadata (-m dup) 00:11:38.519 - enabled no-holes (-O no-holes) 00:11:38.519 - enabled free-space-tree (-R free-space-tree) 00:11:38.519 00:11:38.519 Label: (null) 00:11:38.519 UUID: 7caee040-0b47-471d-8319-80b9966f6363 00:11:38.519 Node size: 16384 00:11:38.519 Sector size: 4096 (CPU page size: 4096) 00:11:38.519 Filesystem size: 510.00MiB 00:11:38.519 Block group profiles: 00:11:38.519 Data: single 8.00MiB 00:11:38.519 Metadata: DUP 32.00MiB 00:11:38.519 System: DUP 8.00MiB 00:11:38.519 SSD detected: yes 00:11:38.519 Zoned device: no 00:11:38.519 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:38.519 Checksum: crc32c 00:11:38.519 Number of devices: 1 00:11:38.519 Devices: 00:11:38.519 ID SIZE PATH 00:11:38.519 1 510.00MiB /dev/nvme0n1p1 00:11:38.519 00:11:38.519 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:38.519 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 256353 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.776 00:11:38.776 real 0m0.288s 00:11:38.776 user 0m0.032s 00:11:38.776 sys 0m0.153s 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:38.776 ************************************ 00:11:38.776 END TEST filesystem_btrfs 
00:11:38.776 ************************************ 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.776 ************************************ 00:11:38.776 START TEST filesystem_xfs 00:11:38.776 ************************************ 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:38.776 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:39.034 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:39.034 = sectsz=512 attr=2, projid32bit=1 00:11:39.034 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:39.034 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:39.034 data = bsize=4096 blocks=130560, imaxpct=25 00:11:39.034 = sunit=0 swidth=0 blks 00:11:39.034 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:39.034 log =internal log bsize=4096 blocks=16384, version=2 00:11:39.034 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:39.034 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:39.034 Discarding blocks...Done. 
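Each filesystem case runs the same smoke test once mkfs finishes; the xfs run whose mkfs output ends above goes through it next, just as the ext4 and btrfs cases did. Condensed from the filesystem.sh@23-43 steps in the trace (the device names and the 256353 pid are the ones from this particular run):
  mount /dev/nvme0n1p1 /mnt/device          # filesystem.sh@23
  touch /mnt/device/aaa                     # write a file through the NVMe-oF namespace
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 256353                            # the nvmf_tgt process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace and partition still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1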
00:11:39.034 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:39.034 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.596 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.596 00:55:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 256353 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.596 00:11:39.596 real 0m0.690s 00:11:39.596 user 0m0.024s 00:11:39.596 sys 0m0.099s 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:39.596 ************************************ 00:11:39.596 END TEST filesystem_xfs 00:11:39.596 ************************************ 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:39.596 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.527 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.527 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:40.527 00:55:46 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:40.527 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.527 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:40.527 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.527 00:55:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 256353 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 256353 ']' 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 256353 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256353 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256353' 00:11:40.527 killing process with pid 256353 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 256353 00:11:40.527 00:55:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 256353 00:11:43.051 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:43.051 00:11:43.051 real 0m10.206s 00:11:43.051 user 0m38.580s 00:11:43.051 sys 0m1.378s 00:11:43.051 00:55:49 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.051 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.051 ************************************ 00:11:43.051 END TEST nvmf_filesystem_no_in_capsule 00:11:43.051 ************************************ 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.349 ************************************ 00:11:43.349 START TEST nvmf_filesystem_in_capsule 00:11:43.349 ************************************ 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=258167 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 258167 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 258167 ']' 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.349 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.350 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
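The in-capsule variant that starts here repeats the same target-side setup as the first test, differing only in the transport's in-capsule data size (-c 4096 rather than -c 0). Condensed from the rpc_cmd calls visible in the trace; rpc_cmd is the suite's wrapper, shown here as direct scripts/rpc.py invocations for readability, and the hostnqn/hostid values are the ones this run uses.
  in_capsule=4096
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c $in_capsule
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # host side: connect to the exported namespace over RDMA
  nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420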
00:11:43.350 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.350 00:55:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.350 [2024-11-19 00:55:49.886780] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:43.350 [2024-11-19 00:55:49.886885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.350 [2024-11-19 00:55:50.013803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.608 [2024-11-19 00:55:50.137112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.608 [2024-11-19 00:55:50.137164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.608 [2024-11-19 00:55:50.137176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.608 [2024-11-19 00:55:50.137186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.608 [2024-11-19 00:55:50.137195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.608 [2024-11-19 00:55:50.139649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.608 [2024-11-19 00:55:50.139730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.608 [2024-11-19 00:55:50.139796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.608 [2024-11-19 00:55:50.139817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.176 [2024-11-19 00:55:50.761455] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:11:44.176 [2024-11-19 00:55:50.771058] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:11:44.176 [2024-11-19 00:55:50.771088] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.176 00:55:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.743 Malloc1 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.743 [2024-11-19 00:55:51.383038] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:44.743 00:55:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.743 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:44.743 { 00:11:44.743 "name": "Malloc1", 00:11:44.743 "aliases": [ 00:11:44.743 "b2059685-35b5-4376-9430-00eb71179bbb" 00:11:44.743 ], 00:11:44.743 "product_name": "Malloc disk", 00:11:44.743 "block_size": 512, 00:11:44.743 "num_blocks": 1048576, 00:11:44.743 "uuid": "b2059685-35b5-4376-9430-00eb71179bbb", 00:11:44.743 "assigned_rate_limits": { 00:11:44.743 "rw_ios_per_sec": 0, 00:11:44.743 "rw_mbytes_per_sec": 0, 00:11:44.743 "r_mbytes_per_sec": 0, 00:11:44.743 "w_mbytes_per_sec": 0 00:11:44.743 }, 00:11:44.743 "claimed": true, 00:11:44.743 "claim_type": "exclusive_write", 00:11:44.743 "zoned": false, 00:11:44.743 "supported_io_types": { 00:11:44.743 "read": true, 00:11:44.743 "write": true, 00:11:44.743 "unmap": true, 00:11:44.743 "flush": true, 00:11:44.743 "reset": true, 00:11:44.743 "nvme_admin": false, 00:11:44.743 "nvme_io": false, 00:11:44.743 "nvme_io_md": false, 00:11:44.743 "write_zeroes": true, 00:11:44.743 "zcopy": true, 00:11:44.743 "get_zone_info": false, 00:11:44.743 "zone_management": false, 00:11:44.743 "zone_append": false, 00:11:44.743 "compare": false, 00:11:44.743 "compare_and_write": false, 00:11:44.743 "abort": true, 00:11:44.743 "seek_hole": false, 00:11:44.743 "seek_data": false, 00:11:44.743 "copy": true, 00:11:44.743 "nvme_iov_md": false 00:11:44.743 }, 00:11:44.743 "memory_domains": [ 00:11:44.744 { 00:11:44.744 "dma_device_id": "system", 00:11:44.744 "dma_device_type": 1 00:11:44.744 }, 00:11:44.744 { 00:11:44.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.744 "dma_device_type": 2 00:11:44.744 } 00:11:44.744 ], 00:11:44.744 "driver_specific": {} 00:11:44.744 } 00:11:44.744 ]' 00:11:44.744 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:45.002 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:45.002 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:45.002 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:45.002 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:45.002 00:55:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:45.002 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:45.002 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:45.260 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.260 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:45.260 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.260 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:45.260 00:55:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:47.159 00:55:53 
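The host-side steps traced above (connect, wait for the namespace, verify its size) reduce to roughly the following; the hostnqn/hostid arguments from the log are omitted here, and the polling loop is an illustrative condensation of the test's waitforserial helper:
  # connect to the subsystem over RDMA, as in the trace above
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  # poll until a block device carrying the expected serial appears
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
  # resolve the device name and read its size from sysfs (512-byte sectors)
  dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1}')
  echo $(( $(cat /sys/block/$dev/size) * 512 ))   # 536870912, matching the malloc bdev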
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:47.159 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:47.416 00:55:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:48.347 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:48.347 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:48.347 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:48.347 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.347 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.347 ************************************ 00:11:48.347 START TEST filesystem_in_capsule_ext4 00:11:48.347 ************************************ 00:11:48.347 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:48.347 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:48.348 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.348 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:48.348 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:48.348 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:48.348 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:48.348 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:48.348 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:48.348 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:48.348 00:55:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:48.348 mke2fs 1.47.0 (5-Feb-2023) 00:11:48.348 Discarding device blocks: 0/522240 done 00:11:48.348 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:48.348 Filesystem UUID: 7c052569-6638-45c9-a0d5-58e2a750fd4c 00:11:48.348 Superblock backups stored on blocks: 00:11:48.348 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:48.348 00:11:48.348 Allocating group tables: 0/64 done 00:11:48.348 Writing inode tables: 0/64 done 00:11:48.348 Creating journal (8192 blocks): done 00:11:48.348 Writing superblocks and filesystem accounting information: 0/64 done 00:11:48.348 00:11:48.348 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:48.348 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.348 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 258167 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.606 00:11:48.606 real 0m0.196s 00:11:48.606 user 0m0.027s 00:11:48.606 sys 0m0.060s 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:48.606 ************************************ 00:11:48.606 END TEST filesystem_in_capsule_ext4 00:11:48.606 ************************************ 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 
-- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.606 ************************************ 00:11:48.606 START TEST filesystem_in_capsule_btrfs 00:11:48.606 ************************************ 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:48.606 btrfs-progs v6.8.1 00:11:48.606 See https://btrfs.readthedocs.io for more information. 00:11:48.606 00:11:48.606 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:48.606 NOTE: several default settings have changed in version 5.15, please make sure 00:11:48.606 this does not affect your deployments: 00:11:48.606 - DUP for metadata (-m dup) 00:11:48.606 - enabled no-holes (-O no-holes) 00:11:48.606 - enabled free-space-tree (-R free-space-tree) 00:11:48.606 00:11:48.606 Label: (null) 00:11:48.606 UUID: 841327a3-7779-4bc4-9960-8a6b7a5daea4 00:11:48.606 Node size: 16384 00:11:48.606 Sector size: 4096 (CPU page size: 4096) 00:11:48.606 Filesystem size: 510.00MiB 00:11:48.606 Block group profiles: 00:11:48.606 Data: single 8.00MiB 00:11:48.606 Metadata: DUP 32.00MiB 00:11:48.606 System: DUP 8.00MiB 00:11:48.606 SSD detected: yes 00:11:48.606 Zoned device: no 00:11:48.606 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:48.606 Checksum: crc32c 00:11:48.606 Number of devices: 1 00:11:48.606 Devices: 00:11:48.606 ID SIZE PATH 00:11:48.606 1 510.00MiB /dev/nvme0n1p1 00:11:48.606 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:48.606 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 258167 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.865 00:11:48.865 real 0m0.232s 00:11:48.865 user 0m0.024s 00:11:48.865 sys 0m0.109s 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.865 ************************************ 00:11:48.865 END TEST filesystem_in_capsule_btrfs 00:11:48.865 ************************************ 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.865 ************************************ 00:11:48.865 START TEST filesystem_in_capsule_xfs 00:11:48.865 ************************************ 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:48.865 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:49.124 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:49.124 = sectsz=512 attr=2, projid32bit=1 00:11:49.124 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:49.124 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:49.124 data = bsize=4096 blocks=130560, imaxpct=25 00:11:49.124 = sunit=0 swidth=0 blks 00:11:49.124 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:49.124 log =internal log bsize=4096 blocks=16384, version=2 00:11:49.124 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:49.124 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:49.124 Discarding blocks...Done. 
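The trace that follows repeats, for XFS, the same mount-and-touch verification already run for ext4 and btrfs above; stripped of the xtrace prefixes it is roughly (device, mountpoint, and pid exactly as they appear in this log):
  mount /dev/nvme0n1p1 /mnt/device          # mount the freshly formatted partition
  touch /mnt/device/aaa && sync             # prove the filesystem is writable
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 258167                            # nvmf target (pid 258167) must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # and so is the test partition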
00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 258167 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.124 00:11:49.124 real 0m0.200s 00:11:49.124 user 0m0.017s 00:11:49.124 sys 0m0.070s 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:49.124 ************************************ 00:11:49.124 END TEST filesystem_in_capsule_xfs 00:11:49.124 ************************************ 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:49.124 00:55:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.056 00:55:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 258167 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 258167 ']' 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 258167 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 258167 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.056 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.057 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 258167' 00:11:50.057 killing process with pid 258167 00:11:50.057 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 258167 00:11:50.057 00:55:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 258167 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:53.366 00:11:53.366 real 0m9.525s 00:11:53.366 
user 0m35.788s 00:11:53.366 sys 0m1.206s 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.366 ************************************ 00:11:53.366 END TEST nvmf_filesystem_in_capsule 00:11:53.366 ************************************ 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:53.366 rmmod nvme_rdma 00:11:53.366 rmmod nvme_fabrics 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:53.366 00:11:53.366 real 0m26.295s 00:11:53.366 user 1m16.485s 00:11:53.366 sys 0m7.182s 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.366 ************************************ 00:11:53.366 END TEST nvmf_filesystem 00:11:53.366 ************************************ 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:53.366 ************************************ 00:11:53.366 START TEST nvmf_target_discovery 00:11:53.366 ************************************ 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:53.366 * Looking for test storage... 
00:11:53.366 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.366 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:53.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.367 --rc genhtml_branch_coverage=1 00:11:53.367 --rc genhtml_function_coverage=1 00:11:53.367 --rc genhtml_legend=1 00:11:53.367 --rc geninfo_all_blocks=1 00:11:53.367 --rc geninfo_unexecuted_blocks=1 00:11:53.367 00:11:53.367 ' 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:53.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.367 --rc genhtml_branch_coverage=1 00:11:53.367 --rc genhtml_function_coverage=1 00:11:53.367 --rc genhtml_legend=1 00:11:53.367 --rc geninfo_all_blocks=1 00:11:53.367 --rc geninfo_unexecuted_blocks=1 00:11:53.367 00:11:53.367 ' 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:53.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.367 --rc genhtml_branch_coverage=1 00:11:53.367 --rc genhtml_function_coverage=1 00:11:53.367 --rc genhtml_legend=1 00:11:53.367 --rc geninfo_all_blocks=1 00:11:53.367 --rc geninfo_unexecuted_blocks=1 00:11:53.367 00:11:53.367 ' 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:53.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.367 --rc genhtml_branch_coverage=1 00:11:53.367 --rc genhtml_function_coverage=1 00:11:53.367 --rc genhtml_legend=1 00:11:53.367 --rc geninfo_all_blocks=1 00:11:53.367 --rc geninfo_unexecuted_blocks=1 00:11:53.367 00:11:53.367 ' 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.367 00:55:59 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.367 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.367 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:53.368 00:55:59 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:58.645 00:56:05 
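The "[: : integer expression expected" message from common.sh line 33 above is non-fatal here (the run continues); it is the usual bash complaint when an unset variable is compared with -eq. A minimal reproduction, with a hypothetical variable name:
  v=""
  [ "$v" -eq 1 ]          # bash: [: : integer expression expected
  [ "${v:-0}" -eq 1 ]     # defaulting the empty value to 0 avoids the error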
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:58.645 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:58.646 
00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:58.646 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:58.646 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@405 -- # modinfo irdma 00:11:58.646 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:58.906 Found net devices under 0000:af:00.0: cvl_0_0 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:58.906 Found net devices under 0000:af:00.1: cvl_0_1 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
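Condensed, the RDMA bring-up being traced here is: reload irdma with RoCE enabled for the Intel E810 ports found above, load the generic IB/RDMA modules, then confirm each RoCE netdev carries its test address (the addresses are verified in the allocate_nic_ips output that follows). Module and interface names are taken from this log:
  modprobe irdma roce_ena=1
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do modprobe "$m"; done
  ip -o -4 addr show cvl_0_0   # expected: 192.168.100.8/24
  ip -o -4 addr show cvl_0_1   # expected: 192.168.100.9/24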
nvmf/common.sh@77 -- # get_rdma_if_list 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:11:58.906 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:58.906 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:11:58.906 altname enp175s0f0np0 00:11:58.906 altname ens801f0np0 00:11:58.906 inet 192.168.100.8/24 scope 
global cvl_0_0 00:11:58.906 valid_lft forever preferred_lft forever 00:11:58.906 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:11:58.906 valid_lft forever preferred_lft forever 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:11:58.906 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:58.906 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:11:58.906 altname enp175s0f1np1 00:11:58.906 altname ens801f1np1 00:11:58.906 inet 192.168.100.9/24 scope global cvl_0_1 00:11:58.906 valid_lft forever preferred_lft forever 00:11:58.906 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:11:58.906 valid_lft forever preferred_lft forever 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:58.906 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:58.907 00:56:05 
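For orientation, the allocate_nic_ips pass above boils down to: for every RDMA-capable interface, read its current IPv4 address and only assign one from 192.168.100.0/24 if none is present (cvl_0_0 and cvl_0_1 already carry 192.168.100.8 and .9 here, so the assignment branch is skipped). A rough sketch; the fallback "ip addr add" form is an assumption about the branch not exercised in this log:

# Read an interface's IPv4 address exactly as the traced pipeline does.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

count=8    # NVMF_IP_LEAST_ADDR in this run
for nic in cvl_0_0 cvl_0_1; do
    addr=$(get_ip_address "$nic")
    if [ -z "$addr" ]; then
        # Not hit in this run: give the port an address in the test subnet (assumed form).
        ip addr add "192.168.100.${count}/24" dev "$nic"
    fi
    count=$((count + 1))
done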
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo cvl_0_0 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo cvl_0_1 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:58.907 192.168.100.9' 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:58.907 192.168.100.9' 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:58.907 192.168.100.9' 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=263006 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 263006 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 263006 ']' 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.907 00:56:05 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.166 [2024-11-19 00:56:05.634437] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:59.166 [2024-11-19 00:56:05.634527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.166 [2024-11-19 00:56:05.761202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.423 [2024-11-19 00:56:05.871682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.423 [2024-11-19 00:56:05.871725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
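To summarise the target start-up traced above: nvmfappstart launches the nvmf_tgt binary on four cores and waitforlisten blocks until the application answers on its RPC socket. A simplified sketch; the readiness probe via rpc_get_methods is an assumption about waitforlisten's internals, and paths are taken relative to the spdk checkout shown in the log:

# Start the SPDK NVMe-oF target on cores 0-3 with all tracepoint groups enabled.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Block until the target answers on its default RPC socket.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up"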
00:11:59.423 [2024-11-19 00:56:05.871737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.423 [2024-11-19 00:56:05.871747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.423 [2024-11-19 00:56:05.871755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.423 [2024-11-19 00:56:05.874141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.424 [2024-11-19 00:56:05.874232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.424 [2024-11-19 00:56:05.874304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.424 [2024-11-19 00:56:05.874345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.989 [2024-11-19 00:56:06.504368] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:11:59.989 [2024-11-19 00:56:06.513893] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:11:59.989 [2024-11-19 00:56:06.513922] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.989 Null1 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.989 [2024-11-19 00:56:06.566401] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.989 Null2 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:59.989 00:56:06 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.989 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 Null3 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 Null4 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.990 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.248 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.248 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:12:00.248 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.248 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.248 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
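Putting the discovery.sh setup steps above in one place: the test creates the RDMA transport, then for each of four subsystems creates a null bdev, a subsystem, a namespace and an RDMA listener, and finally adds a discovery listener plus a referral to port 4430. A condensed sketch using scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it, which is an assumption; the RPC names and arguments are exactly those in the log):

rpc=./scripts/rpc.py

# Transport shared by all subsystems.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done

# Discovery service listener plus a referral pointing at port 4430.
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430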
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.248 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:12:00.248 00:12:00.248 Discovery Log Number of Records 6, Generation counter 6 00:12:00.248 =====Discovery Log Entry 0====== 00:12:00.248 trtype: rdma 00:12:00.248 adrfam: ipv4 00:12:00.248 subtype: current discovery subsystem 00:12:00.248 treq: not required 00:12:00.248 portid: 0 00:12:00.248 trsvcid: 4420 00:12:00.248 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:00.248 traddr: 192.168.100.8 00:12:00.248 eflags: explicit discovery connections, duplicate discovery information 00:12:00.248 rdma_prtype: not specified 00:12:00.248 rdma_qptype: connected 00:12:00.248 rdma_cms: rdma-cm 00:12:00.248 rdma_pkey: 0x0000 00:12:00.248 =====Discovery Log Entry 1====== 00:12:00.248 trtype: rdma 00:12:00.248 adrfam: ipv4 00:12:00.248 subtype: nvme subsystem 00:12:00.248 treq: not required 00:12:00.248 portid: 0 00:12:00.248 trsvcid: 4420 00:12:00.248 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:00.248 traddr: 192.168.100.8 00:12:00.248 eflags: none 00:12:00.249 rdma_prtype: not specified 00:12:00.249 rdma_qptype: connected 00:12:00.249 rdma_cms: rdma-cm 00:12:00.249 rdma_pkey: 0x0000 00:12:00.249 =====Discovery Log Entry 2====== 00:12:00.249 trtype: rdma 00:12:00.249 adrfam: ipv4 00:12:00.249 subtype: nvme subsystem 00:12:00.249 treq: not required 00:12:00.249 portid: 0 00:12:00.249 trsvcid: 4420 00:12:00.249 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:00.249 traddr: 192.168.100.8 00:12:00.249 eflags: none 00:12:00.249 rdma_prtype: not specified 00:12:00.249 rdma_qptype: connected 00:12:00.249 rdma_cms: rdma-cm 00:12:00.249 rdma_pkey: 0x0000 00:12:00.249 =====Discovery Log Entry 3====== 00:12:00.249 trtype: rdma 00:12:00.249 adrfam: ipv4 00:12:00.249 subtype: nvme subsystem 00:12:00.249 treq: not required 00:12:00.249 portid: 0 00:12:00.249 trsvcid: 4420 00:12:00.249 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:00.249 traddr: 192.168.100.8 00:12:00.249 eflags: none 00:12:00.249 rdma_prtype: not specified 00:12:00.249 rdma_qptype: connected 00:12:00.249 rdma_cms: rdma-cm 00:12:00.249 rdma_pkey: 0x0000 00:12:00.249 =====Discovery Log Entry 4====== 00:12:00.249 trtype: rdma 00:12:00.249 adrfam: ipv4 00:12:00.249 subtype: nvme subsystem 00:12:00.249 treq: not required 00:12:00.249 portid: 0 00:12:00.249 trsvcid: 4420 00:12:00.249 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:00.249 traddr: 192.168.100.8 00:12:00.249 eflags: none 00:12:00.249 rdma_prtype: not specified 00:12:00.249 rdma_qptype: connected 00:12:00.249 rdma_cms: rdma-cm 00:12:00.249 rdma_pkey: 0x0000 00:12:00.249 =====Discovery Log Entry 5====== 00:12:00.249 trtype: rdma 00:12:00.249 adrfam: ipv4 00:12:00.249 subtype: discovery subsystem referral 00:12:00.249 treq: not required 00:12:00.249 portid: 0 00:12:00.249 trsvcid: 4430 00:12:00.249 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:00.249 traddr: 192.168.100.8 00:12:00.249 eflags: none 00:12:00.249 rdma_prtype: unrecognized 00:12:00.249 rdma_qptype: unrecognized 00:12:00.249 rdma_cms: unrecognized 00:12:00.249 rdma_pkey: 0x0000 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:00.249 Perform nvmf subsystem discovery via RPC 00:12:00.249 00:56:06 
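The discovery log above contains six records: entry 0 is the current discovery subsystem itself, entries 1-4 are the four cnode subsystems created earlier, and entry 5 is the referral added on port 4430. A host that wanted to follow that referral would repeat discovery against the referred transport service id (in this test nothing actually listens on 4430; the referral only exercises the discovery log), for example:

# Follow the referral from entry 5: same address, trsvcid 4430.
nvme discover -t rdma -a 192.168.100.8 -s 4430 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
    --hostid=801347e8-3fd0-e911-906e-0017a4403562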
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.249 [ 00:12:00.249 { 00:12:00.249 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:00.249 "subtype": "Discovery", 00:12:00.249 "listen_addresses": [ 00:12:00.249 { 00:12:00.249 "trtype": "RDMA", 00:12:00.249 "adrfam": "IPv4", 00:12:00.249 "traddr": "192.168.100.8", 00:12:00.249 "trsvcid": "4420" 00:12:00.249 } 00:12:00.249 ], 00:12:00.249 "allow_any_host": true, 00:12:00.249 "hosts": [] 00:12:00.249 }, 00:12:00.249 { 00:12:00.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:00.249 "subtype": "NVMe", 00:12:00.249 "listen_addresses": [ 00:12:00.249 { 00:12:00.249 "trtype": "RDMA", 00:12:00.249 "adrfam": "IPv4", 00:12:00.249 "traddr": "192.168.100.8", 00:12:00.249 "trsvcid": "4420" 00:12:00.249 } 00:12:00.249 ], 00:12:00.249 "allow_any_host": true, 00:12:00.249 "hosts": [], 00:12:00.249 "serial_number": "SPDK00000000000001", 00:12:00.249 "model_number": "SPDK bdev Controller", 00:12:00.249 "max_namespaces": 32, 00:12:00.249 "min_cntlid": 1, 00:12:00.249 "max_cntlid": 65519, 00:12:00.249 "namespaces": [ 00:12:00.249 { 00:12:00.249 "nsid": 1, 00:12:00.249 "bdev_name": "Null1", 00:12:00.249 "name": "Null1", 00:12:00.249 "nguid": "5B3BDEDDD1454165928A04CF38968C42", 00:12:00.249 "uuid": "5b3bdedd-d145-4165-928a-04cf38968c42" 00:12:00.249 } 00:12:00.249 ] 00:12:00.249 }, 00:12:00.249 { 00:12:00.249 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:00.249 "subtype": "NVMe", 00:12:00.249 "listen_addresses": [ 00:12:00.249 { 00:12:00.249 "trtype": "RDMA", 00:12:00.249 "adrfam": "IPv4", 00:12:00.249 "traddr": "192.168.100.8", 00:12:00.249 "trsvcid": "4420" 00:12:00.249 } 00:12:00.249 ], 00:12:00.249 "allow_any_host": true, 00:12:00.249 "hosts": [], 00:12:00.249 "serial_number": "SPDK00000000000002", 00:12:00.249 "model_number": "SPDK bdev Controller", 00:12:00.249 "max_namespaces": 32, 00:12:00.249 "min_cntlid": 1, 00:12:00.249 "max_cntlid": 65519, 00:12:00.249 "namespaces": [ 00:12:00.249 { 00:12:00.249 "nsid": 1, 00:12:00.249 "bdev_name": "Null2", 00:12:00.249 "name": "Null2", 00:12:00.249 "nguid": "0CAB7F09F4CB458EB31016309FC12016", 00:12:00.249 "uuid": "0cab7f09-f4cb-458e-b310-16309fc12016" 00:12:00.249 } 00:12:00.249 ] 00:12:00.249 }, 00:12:00.249 { 00:12:00.249 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:00.249 "subtype": "NVMe", 00:12:00.249 "listen_addresses": [ 00:12:00.249 { 00:12:00.249 "trtype": "RDMA", 00:12:00.249 "adrfam": "IPv4", 00:12:00.249 "traddr": "192.168.100.8", 00:12:00.249 "trsvcid": "4420" 00:12:00.249 } 00:12:00.249 ], 00:12:00.249 "allow_any_host": true, 00:12:00.249 "hosts": [], 00:12:00.249 "serial_number": "SPDK00000000000003", 00:12:00.249 "model_number": "SPDK bdev Controller", 00:12:00.249 "max_namespaces": 32, 00:12:00.249 "min_cntlid": 1, 00:12:00.249 "max_cntlid": 65519, 00:12:00.249 "namespaces": [ 00:12:00.249 { 00:12:00.249 "nsid": 1, 00:12:00.249 "bdev_name": "Null3", 00:12:00.249 "name": "Null3", 00:12:00.249 "nguid": "58272A3B9FE74AEB8F831ED0907B8FB8", 00:12:00.249 "uuid": "58272a3b-9fe7-4aeb-8f83-1ed0907b8fb8" 00:12:00.249 } 00:12:00.249 ] 00:12:00.249 }, 00:12:00.249 { 00:12:00.249 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:00.249 "subtype": "NVMe", 00:12:00.249 "listen_addresses": [ 00:12:00.249 { 00:12:00.249 
"trtype": "RDMA", 00:12:00.249 "adrfam": "IPv4", 00:12:00.249 "traddr": "192.168.100.8", 00:12:00.249 "trsvcid": "4420" 00:12:00.249 } 00:12:00.249 ], 00:12:00.249 "allow_any_host": true, 00:12:00.249 "hosts": [], 00:12:00.249 "serial_number": "SPDK00000000000004", 00:12:00.249 "model_number": "SPDK bdev Controller", 00:12:00.249 "max_namespaces": 32, 00:12:00.249 "min_cntlid": 1, 00:12:00.249 "max_cntlid": 65519, 00:12:00.249 "namespaces": [ 00:12:00.249 { 00:12:00.249 "nsid": 1, 00:12:00.249 "bdev_name": "Null4", 00:12:00.249 "name": "Null4", 00:12:00.249 "nguid": "50F3B226601A4D108C29172C9CBD1D0A", 00:12:00.249 "uuid": "50f3b226-601a-4d10-8c29-172c9cbd1d0a" 00:12:00.249 } 00:12:00.249 ] 00:12:00.249 } 00:12:00.249 ] 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.249 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:00.250 
00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.250 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.507 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:00.507 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:00.507 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:00.507 00:56:06 
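The teardown traced above mirrors the setup: delete each subsystem and its backing null bdev, drop the referral, then confirm via bdev_get_bdevs that nothing is left behind. Condensed with the same rpc.py wrapper assumption as before:

rpc=./scripts/rpc.py

for i in 1 2 3 4; do
    $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    $rpc bdev_null_delete "Null$i"
done
$rpc nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430

# The test only flags leftovers if any bdev names remain; here the list is empty.
check_bdevs=$($rpc bdev_get_bdevs | jq -r '.[].name')
[ -n "$check_bdevs" ] && echo "unexpected leftover bdevs: $check_bdevs"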
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:00.507 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.507 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:00.507 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:00.507 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:00.508 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:00.508 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.508 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:00.508 rmmod nvme_rdma 00:12:00.508 rmmod nvme_fabrics 00:12:00.508 00:56:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 263006 ']' 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 263006 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 263006 ']' 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 263006 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263006 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263006' 00:12:00.508 killing process with pid 263006 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 263006 00:12:00.508 00:56:07 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 263006 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:01.883 00:12:01.883 real 0m8.759s 00:12:01.883 user 0m10.562s 00:12:01.883 sys 0m4.848s 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.883 ************************************ 00:12:01.883 END TEST nvmf_target_discovery 
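nvmftestfini, traced above, is the inverse of the start-up: sync, unload the host-side NVMe fabrics modules, and kill the nvmf_tgt process. A stripped-down sketch (error handling, the retry loop around modprobe -r, and killprocess's sudo special case are omitted; the individual commands are those visible in the trace):

sync
modprobe -v -r nvme-rdma          # the rmmod output for nvme_rdma / nvme_fabrics appears in the log
modprobe -v -r nvme-fabrics || true

# killprocess: make sure the pid still exists, then kill and reap it.
nvmfpid=263006                     # pid recorded when nvmf_tgt was started in this run
if ps --no-headers -o comm= "$nvmfpid" > /dev/null; then
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid"
    wait "$nvmfpid" || true
fi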
00:12:01.883 ************************************ 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.883 ************************************ 00:12:01.883 START TEST nvmf_referrals 00:12:01.883 ************************************ 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:01.883 * Looking for test storage... 00:12:01.883 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.883 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:01.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.884 --rc genhtml_branch_coverage=1 00:12:01.884 --rc genhtml_function_coverage=1 00:12:01.884 --rc genhtml_legend=1 00:12:01.884 --rc geninfo_all_blocks=1 00:12:01.884 --rc geninfo_unexecuted_blocks=1 00:12:01.884 00:12:01.884 ' 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:01.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.884 --rc genhtml_branch_coverage=1 00:12:01.884 --rc genhtml_function_coverage=1 00:12:01.884 --rc genhtml_legend=1 00:12:01.884 --rc geninfo_all_blocks=1 00:12:01.884 --rc geninfo_unexecuted_blocks=1 00:12:01.884 00:12:01.884 ' 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:01.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.884 --rc genhtml_branch_coverage=1 00:12:01.884 --rc genhtml_function_coverage=1 00:12:01.884 --rc genhtml_legend=1 00:12:01.884 --rc geninfo_all_blocks=1 00:12:01.884 --rc geninfo_unexecuted_blocks=1 00:12:01.884 00:12:01.884 ' 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:01.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.884 --rc genhtml_branch_coverage=1 00:12:01.884 --rc genhtml_function_coverage=1 00:12:01.884 --rc genhtml_legend=1 00:12:01.884 --rc geninfo_all_blocks=1 00:12:01.884 --rc geninfo_unexecuted_blocks=1 00:12:01.884 00:12:01.884 ' 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
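The cmp_versions trace above decides whether the installed lcov is older than 2, and only then enables the branch/function coverage flags. A simplified element-wise compare in the same spirit (the real helper also handles '-' and ':' separators; this sketch only splits on dots):

# lt A B: return success if dotted version A is strictly less than B.
lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi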
nvmf/common.sh@7 -- # uname -s 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.884 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.885 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.885 00:56:08 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:08.458 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.458 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 
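The device scan traced above and continuing below classifies candidate NICs purely by PCI vendor:device ID (Intel 0x8086 parts populate the e810/x722 arrays, Mellanox 0x15b3 parts the mlx array), and since this run targets e810 only the 0x1592/0x159b functions are kept. A minimal standalone sketch of the same idea, reading the IDs straight from sysfs; the helper name and the two-ID list here are illustrative, not the exact set nvmf/common.sh matches:

    #!/usr/bin/env bash
    # Print PCI functions whose vendor:device ID marks them as Intel E810 (ice) parts.
    list_e810_functions() {
        local pci vendor device
        for pci in /sys/bus/pci/devices/*; do
            vendor=$(<"$pci/vendor")    # e.g. 0x8086
            device=$(<"$pci/device")    # e.g. 0x159b
            if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
                echo "${pci##*/} ($vendor - $device)"
            fi
        done
    }
    list_e810_functions

On the machine above this would print the two 0000:af:00.x functions the trace reports as "Found".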
00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:08.459 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@405 -- # modinfo irdma 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:08.459 Found net devices under 0000:af:00.0: cvl_0_0 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:08.459 Found net devices under 0000:af:00.1: cvl_0_1 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 
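Each matched PCI function is then resolved to its kernel netdev through sysfs: the glob /sys/bus/pci/devices/$pci/net/* expands to the interface directory, and stripping the leading path leaves the name, which is how cvl_0_0 and cvl_0_1 show up above. A standalone sketch of that mapping, with the example address taken from the trace:

    # Print the net interface name(s) backing a PCI function, e.g. 0000:af:00.0 -> cvl_0_0.
    pci_to_netdev() {
        local pci=$1 path
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] || { echo "no netdev for $pci" >&2; return 1; }
            echo "${path##*/}"
        done
    }
    pci_to_netdev 0000:af:00.0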
00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:08.459 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:12:08.459 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:08.459 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:12:08.459 altname enp175s0f0np0 00:12:08.459 altname ens801f0np0 00:12:08.459 inet 192.168.100.8/24 scope global cvl_0_0 00:12:08.459 valid_lft forever preferred_lft forever 00:12:08.459 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:12:08.460 valid_lft forever preferred_lft forever 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:12:08.460 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:08.460 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:12:08.460 altname enp175s0f1np1 00:12:08.460 altname ens801f1np1 00:12:08.460 inet 
192.168.100.9/24 scope global cvl_0_1 00:12:08.460 valid_lft forever preferred_lft forever 00:12:08.460 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:12:08.460 valid_lft forever preferred_lft forever 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:08.460 00:56:14 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:08.460 192.168.100.9' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:08.460 192.168.100.9' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:08.460 192.168.100.9' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=266742 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 266742 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 266742 ']' 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.460 00:56:14 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.460 [2024-11-19 00:56:14.470014] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:08.461 [2024-11-19 00:56:14.470107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.461 [2024-11-19 00:56:14.593846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.461 [2024-11-19 00:56:14.701934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.461 [2024-11-19 00:56:14.701981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.461 [2024-11-19 00:56:14.701991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.461 [2024-11-19 00:56:14.702000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.461 [2024-11-19 00:56:14.702007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.461 [2024-11-19 00:56:14.704378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.461 [2024-11-19 00:56:14.704436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.461 [2024-11-19 00:56:14.704502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.461 [2024-11-19 00:56:14.704523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.719 [2024-11-19 00:56:15.338248] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 
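By this point the target binary has been launched (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 266742) and waitforlisten has blocked until the JSON-RPC socket answered before the first nvmf_create_transport call went out. A minimal sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket and an SPDK checkout as the working directory; the polling loop is a simplification of what waitforlisten actually does:

    # Start the NVMe-oF target on cores 0-3 and wait for its JSON-RPC socket to answer.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    tgt_pid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
        sleep 0.2
    done
    echo "nvmf_tgt (pid $tgt_pid) is ready"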
00:12:08.719 [2024-11-19 00:56:15.347679] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:12:08.719 [2024-11-19 00:56:15.347707] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.719 [2024-11-19 00:56:15.360037] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.719 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:12:08.720 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.720 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.720 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.720 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:08.720 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:08.720 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.720 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.720 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:08.978 
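The discovery service is configured entirely over JSON-RPC: an RDMA transport, a discovery listener on 192.168.100.8:8009, then three referrals pointing at 127.0.0.2 through 127.0.0.4 on port 4430, after which nvmf_discovery_get_referrals must report exactly three entries (the (( 3 == 3 )) check above). The same sequence written out as plain rpc.py calls, with the addresses taken from the trace and the default RPC socket assumed:

    rpc="scripts/rpc.py"
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length   # expect 3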
00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.978 
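Every mutation is then verified from both sides: the referral list as the target reports it over RPC and the list a host actually receives in the discovery log page via nvme discover must sort to the same set of addresses, which is what the get_referral_ips rpc / get_referral_ips nvme pair compares above. A condensed sketch of that double check; NVME_HOSTNQN and NVME_HOSTID are assumed to hold the values generated earlier in the run:

    # Referral addresses as the target reports them over RPC.
    rpc_ips=$(scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    # Referral addresses as a host sees them in the discovery log page.
    nvme_ips=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
                   -t rdma -a 192.168.100.8 -s 8009 -o json |
               jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    [[ $rpc_ips == "$nvme_ips" ]] && echo "referral views match" || echo "mismatch" >&2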
00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.978 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r 
'.[].address.traddr' 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:09.237 00:56:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
--hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:09.495 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@75 -- # jq -r .subnqn 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:09.753 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:10.012 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
--hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:10.270 rmmod nvme_rdma 00:12:10.270 rmmod nvme_fabrics 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 266742 ']' 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 266742 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 266742 ']' 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 266742 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.270 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 266742 00:12:10.528 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.528 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.528 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 266742' 00:12:10.528 killing process with pid 266742 00:12:10.528 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 266742 00:12:10.528 00:56:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 266742 00:12:11.464 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
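Teardown mirrors setup: the host-side fabric modules are unloaded with retries (the rmmod nvme_rdma / nvme_fabrics lines above), the nvmf_tgt process is killed by pid and waited on, and only then does the test return. A condensed sketch of that cleanup, assuming nvmfpid still holds the pid recorded at startup and that the caller may unload modules:

    # Best-effort unload of the host-side fabric modules; they can stay referenced briefly.
    for _ in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    # Stop the target recorded at startup and reap it.
    kill "$nvmfpid" && wait "$nvmfpid"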
00:12:11.464 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:11.464 00:12:11.464 real 0m9.836s 00:12:11.464 user 0m15.585s 00:12:11.464 sys 0m5.223s 00:12:11.464 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.464 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.464 ************************************ 00:12:11.464 END TEST nvmf_referrals 00:12:11.464 ************************************ 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:11.723 ************************************ 00:12:11.723 START TEST nvmf_connect_disconnect 00:12:11.723 ************************************ 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:11.723 * Looking for test storage... 00:12:11.723 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 
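With nvmf_referrals done (real 0m9.836s) the harness moves on to nvmf_connect_disconnect, and the first thing its common code does is compare the installed lcov version against 1.15: lt 1.15 2 feeds cmp_versions, which splits both strings into fields and compares them numerically, as traced here and on the following lines. A simplified self-contained equivalent (the name version_lt is illustrative; the harness's cmp_versions also handles separators other than dots and non-numeric fields):

    # Succeed if version $1 is strictly lower than version $2, comparing numeric fields.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "1.15 < 2"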
00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.723 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:11.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.724 --rc genhtml_branch_coverage=1 00:12:11.724 --rc genhtml_function_coverage=1 00:12:11.724 --rc genhtml_legend=1 00:12:11.724 --rc geninfo_all_blocks=1 00:12:11.724 --rc geninfo_unexecuted_blocks=1 00:12:11.724 00:12:11.724 ' 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:11.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.724 --rc genhtml_branch_coverage=1 00:12:11.724 --rc genhtml_function_coverage=1 00:12:11.724 --rc genhtml_legend=1 00:12:11.724 --rc geninfo_all_blocks=1 00:12:11.724 --rc geninfo_unexecuted_blocks=1 00:12:11.724 00:12:11.724 ' 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:11.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.724 --rc genhtml_branch_coverage=1 00:12:11.724 --rc genhtml_function_coverage=1 00:12:11.724 --rc genhtml_legend=1 00:12:11.724 --rc geninfo_all_blocks=1 00:12:11.724 --rc geninfo_unexecuted_blocks=1 00:12:11.724 00:12:11.724 ' 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:11.724 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.724 --rc genhtml_branch_coverage=1 00:12:11.724 --rc genhtml_function_coverage=1 00:12:11.724 --rc genhtml_legend=1 00:12:11.724 --rc geninfo_all_blocks=1 00:12:11.724 --rc geninfo_unexecuted_blocks=1 00:12:11.724 00:12:11.724 ' 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.724 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.984 00:56:18 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.984 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:11.984 00:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.557 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.557 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.557 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.557 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.557 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.557 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.557 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.558 00:56:24 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:18.558 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:18.558 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@405 -- # modinfo irdma 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.558 00:56:24 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:18.558 Found net devices under 0000:af:00.0: cvl_0_0 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:18.558 Found net devices under 0000:af:00.1: cvl_0_1 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.558 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:12:18.559 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:18.559 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:12:18.559 altname enp175s0f0np0 00:12:18.559 altname ens801f0np0 00:12:18.559 
inet 192.168.100.8/24 scope global cvl_0_0 00:12:18.559 valid_lft forever preferred_lft forever 00:12:18.559 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:12:18.559 valid_lft forever preferred_lft forever 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:12:18.559 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:18.559 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:12:18.559 altname enp175s0f1np1 00:12:18.559 altname ens801f1np1 00:12:18.559 inet 192.168.100.9/24 scope global cvl_0_1 00:12:18.559 valid_lft forever preferred_lft forever 00:12:18.559 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:12:18.559 valid_lft forever preferred_lft forever 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 
== \c\v\l\_\0\_\0 ]] 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_0 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:18.559 192.168.100.9' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:18.559 192.168.100.9' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:18.559 192.168.100.9' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 
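What the fragment above is doing: allocate_nic_ips and get_available_rdma_ips walk the detected RDMA interfaces (cvl_0_0 and cvl_0_1 in this run), reduce `ip -o -4 addr show` output to bare addresses, and take the first and second entries as the target IPs. A reduced sketch of that extraction, assuming the same interface names as this run:

    # derive the two RDMA target IPs the way the trace does
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address cvl_0_0)" "$(get_ip_address cvl_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 here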
00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.559 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=270555 00:12:18.560 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 270555 00:12:18.560 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.560 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 270555 ']' 00:12:18.560 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.560 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.560 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.560 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.560 00:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 [2024-11-19 00:56:24.401247] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:18.560 [2024-11-19 00:56:24.401365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.560 [2024-11-19 00:56:24.524639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.560 [2024-11-19 00:56:24.632679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.560 [2024-11-19 00:56:24.632729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
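nvmfappstart has just launched the target (`nvmf_tgt -i 0 -e 0xFFFF -m 0xF`, pid 270555 here) and waitforlisten polls the RPC socket until the application answers; only after that do the EAL/reactor notices give way to RPC configuration. A minimal sketch of that start-and-wait pattern, assuming a standard SPDK build tree and the default /var/tmp/spdk.sock socket:

    # start the NVMe-oF target and block until its RPC server answers
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done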
00:12:18.560 [2024-11-19 00:56:24.632739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.560 [2024-11-19 00:56:24.632749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.560 [2024-11-19 00:56:24.632756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.560 [2024-11-19 00:56:24.635022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.560 [2024-11-19 00:56:24.635113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.560 [2024-11-19 00:56:24.635182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.560 [2024-11-19 00:56:24.635204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.560 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.560 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:18.560 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.560 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:18.560 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.560 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:18.560 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.560 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 [2024-11-19 00:56:25.247838] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:18.819 [2024-11-19 00:56:25.264895] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:12:18.819 [2024-11-19 00:56:25.274376] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:12:18.819 [2024-11-19 00:56:25.274407] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:12:18.819 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.819 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:18.819 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.820 [2024-11-19 00:56:25.407161] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:18.820 00:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:22.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.505 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:47.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:50.399 01:00:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:50.399 rmmod nvme_rdma 00:16:50.399 rmmod nvme_fabrics 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 270555 ']' 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 270555 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 270555 ']' 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 270555 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 270555 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 270555' 00:16:50.399 killing process with pid 270555 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 270555 00:16:50.399 01:00:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 270555 00:16:51.780 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:51.780 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:51.780 00:16:51.780 real 4m40.125s 00:16:51.780 user 18m11.598s 00:16:51.780 sys 0m17.721s 00:16:51.780 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.780 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:51.780 
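The run that just completed is the heart of the test: 100 connect/disconnect iterations against nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420, each printing one of the "disconnected 1 controller(s)" lines above, followed by nvmftestfini unloading nvme-rdma/nvme-fabrics and killing pid 270555. A reduced sketch of that loop (not the script itself; the readiness check between connect and disconnect is simplified here):

    # sketch of the 100-iteration connect/disconnect loop behind the log lines above
    subnqn=nqn.2016-06.io.spdk:cnode1
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
    for i in $(seq 1 100); do
        nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n "$subnqn" --hostnqn="$hostnqn"
        # give the kernel a moment to surface the controller before tearing it down
        until nvme list-subsys 2>/dev/null | grep -q "$subnqn"; do sleep 0.1; done
        nvme disconnect -n "$subnqn"     # emits "NQN:... disconnected 1 controller(s)"
    done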
************************************ 00:16:51.780 END TEST nvmf_connect_disconnect 00:16:51.780 ************************************ 00:16:51.780 01:00:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:16:51.780 01:00:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:51.780 01:00:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.780 01:00:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.780 ************************************ 00:16:51.780 START TEST nvmf_multitarget 00:16:51.780 ************************************ 00:16:51.780 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:16:52.041 * Looking for test storage... 00:16:52.041 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:52.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.041 --rc genhtml_branch_coverage=1 00:16:52.041 --rc genhtml_function_coverage=1 00:16:52.041 --rc genhtml_legend=1 00:16:52.041 --rc geninfo_all_blocks=1 00:16:52.041 --rc geninfo_unexecuted_blocks=1 00:16:52.041 00:16:52.041 ' 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.041 --rc genhtml_branch_coverage=1 00:16:52.041 --rc genhtml_function_coverage=1 00:16:52.041 --rc genhtml_legend=1 00:16:52.041 --rc geninfo_all_blocks=1 00:16:52.041 --rc geninfo_unexecuted_blocks=1 00:16:52.041 00:16:52.041 ' 00:16:52.041 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.041 --rc genhtml_branch_coverage=1 00:16:52.041 --rc genhtml_function_coverage=1 00:16:52.041 --rc genhtml_legend=1 00:16:52.041 --rc geninfo_all_blocks=1 00:16:52.041 --rc geninfo_unexecuted_blocks=1 00:16:52.042 00:16:52.042 ' 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.042 --rc genhtml_branch_coverage=1 00:16:52.042 --rc genhtml_function_coverage=1 00:16:52.042 --rc genhtml_legend=1 00:16:52.042 --rc geninfo_all_blocks=1 00:16:52.042 --rc geninfo_unexecuted_blocks=1 00:16:52.042 00:16:52.042 ' 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.042 01:00:58 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:52.042 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:52.042 
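The "[: : integer expression expected" message logged above is not a test failure; it comes from common.sh line 33 running '[' '' -eq 1 ']' with an empty variable, and test(1) insists that both operands of -eq be integers. A hedged illustration of the failure mode and two defensive spellings (the variable name here is invented for the example):

interrupt_mode=""
[ "$interrupt_mode" -eq 1 ] && echo on         # prints "[: : integer expression expected" to stderr

[ "${interrupt_mode:-0}" -eq 1 ] && echo on    # default the empty value to 0 before the numeric test
[[ $interrupt_mode == 1 ]] && echo on          # or compare as a string instead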
01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:52.042 01:00:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.619 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:58.620 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:58.620 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:58.620 
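gather_supported_nvmf_pci_devs, traced above, matches candidate NICs by PCI vendor/device ID; the two "Found 0000:af:00.x (0x8086 - 0x159b)" lines are the two functions of an Intel E810 adapter. A rough sketch of the same kind of match done directly against sysfs (SPDK itself walks a pre-built pci_bus_cache, so this is only an approximation):

# Print every PCI function whose IDs match an Intel E810 port (0x8086:0x159b).
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")    # e.g. 0x8086
    device=$(<"$dev/device")    # e.g. 0x159b
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        echo "Found ${dev##*/} ($vendor - $device)"
    fi
done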
01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@405 -- # modinfo irdma 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:58.620 Found net devices under 0000:af:00.0: cvl_0_0 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:58.620 Found net devices under 0000:af:00.1: cvl_0_1 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:58.620 01:01:04 
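The "Found net devices under 0000:af:00.x: cvl_0_x" lines come from globbing the net/ directory that the kernel exposes under each PCI function and keeping only the interface names. A minimal sketch of that mapping (the PCI address is hard-coded purely for illustration):

pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"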
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo cvl_0_0 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo cvl_0_1 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:58.620 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:16:58.620 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:58.620 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:16:58.620 altname enp175s0f0np0 00:16:58.620 altname ens801f0np0 00:16:58.620 inet 192.168.100.8/24 scope global cvl_0_0 00:16:58.620 valid_lft forever preferred_lft forever 00:16:58.620 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:16:58.621 valid_lft forever preferred_lft forever 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:16:58.621 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:58.621 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:16:58.621 altname enp175s0f1np1 00:16:58.621 altname ens801f1np1 00:16:58.621 inet 192.168.100.9/24 scope global cvl_0_1 00:16:58.621 valid_lft forever preferred_lft forever 00:16:58.621 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:16:58.621 valid_lft forever preferred_lft forever 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@450 -- # return 0 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo cvl_0_0 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo cvl_0_1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:58.621 
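get_ip_address, traced twice above (once for allocate_nic_ips and once for get_available_rdma_ips), just parses `ip -o -4 addr show` for the interface and strips the /prefix; the results are then joined into RDMA_IP_LIST and pulled apart again with head/tail. A small sketch of both steps, assuming interfaces named like the cvl_0_* ports in this run:

# First IPv4 address of an interface, without the prefix length.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

rdma_ip_list="$(get_ip_address cvl_0_0)
$(get_ip_address cvl_0_1)"                                    # newline-separated, like RDMA_IP_LIST
first_ip=$(echo "$rdma_ip_list" | head -n 1)                  # 192.168.100.8 in this run
second_ip=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)    # 192.168.100.9 in this run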
01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:58.621 192.168.100.9' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:58.621 192.168.100.9' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:58.621 192.168.100.9' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=320628 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 320628 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 320628 ']' 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.621 01:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:58.621 [2024-11-19 01:01:04.580171] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:58.621 [2024-11-19 01:01:04.580272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.621 [2024-11-19 01:01:04.712536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.621 [2024-11-19 01:01:04.819076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.621 [2024-11-19 01:01:04.819120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.621 [2024-11-19 01:01:04.819130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.621 [2024-11-19 01:01:04.819140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.621 [2024-11-19 01:01:04.819148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.621 [2024-11-19 01:01:04.821474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.621 [2024-11-19 01:01:04.821555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.621 [2024-11-19 01:01:04.821620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.621 [2024-11-19 01:01:04.821642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:58.880 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:59.138 "nvmf_tgt_1" 00:16:59.138 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:59.138 "nvmf_tgt_2" 00:16:59.138 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:59.138 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:59.396 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:59.396 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:59.396 true 00:16:59.396 01:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:59.396 true 00:16:59.396 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:59.396 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:59.654 rmmod nvme_rdma 00:16:59.654 rmmod nvme_fabrics 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 320628 ']' 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 320628 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 320628 ']' 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
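The multitarget test body above reduces to a short RPC conversation over the target's UNIX socket: confirm only the default target exists, create nvmf_tgt_1 and nvmf_tgt_2, confirm the count is 3, delete both, and confirm the count is back to 1. A condensed sketch of that flow, with rpc_py standing in for the full multitarget_rpc.py path used in the log and the flags copied from the trace:

rpc_py=./spdk/test/nvmf/target/multitarget_rpc.py     # path shortened for the example

[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists

$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # same flags as in the trace above
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]

$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only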
common/autotest_common.sh@958 -- # kill -0 320628 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 320628 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 320628' 00:16:59.654 killing process with pid 320628 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 320628 00:16:59.654 01:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 320628 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:01.029 00:17:01.029 real 0m9.002s 00:17:01.029 user 0m12.491s 00:17:01.029 sys 0m4.967s 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:01.029 ************************************ 00:17:01.029 END TEST nvmf_multitarget 00:17:01.029 ************************************ 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.029 ************************************ 00:17:01.029 START TEST nvmf_rpc 00:17:01.029 ************************************ 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:01.029 * Looking for test storage... 
00:17:01.029 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:01.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.029 --rc genhtml_branch_coverage=1 00:17:01.029 --rc genhtml_function_coverage=1 00:17:01.029 --rc genhtml_legend=1 00:17:01.029 --rc geninfo_all_blocks=1 00:17:01.029 --rc geninfo_unexecuted_blocks=1 00:17:01.029 00:17:01.029 ' 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:01.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.029 --rc genhtml_branch_coverage=1 00:17:01.029 --rc genhtml_function_coverage=1 00:17:01.029 --rc genhtml_legend=1 00:17:01.029 --rc geninfo_all_blocks=1 00:17:01.029 --rc geninfo_unexecuted_blocks=1 00:17:01.029 00:17:01.029 ' 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:01.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.029 --rc genhtml_branch_coverage=1 00:17:01.029 --rc genhtml_function_coverage=1 00:17:01.029 --rc genhtml_legend=1 00:17:01.029 --rc geninfo_all_blocks=1 00:17:01.029 --rc geninfo_unexecuted_blocks=1 00:17:01.029 00:17:01.029 ' 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:01.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.029 --rc genhtml_branch_coverage=1 00:17:01.029 --rc genhtml_function_coverage=1 00:17:01.029 --rc genhtml_legend=1 00:17:01.029 --rc geninfo_all_blocks=1 00:17:01.029 --rc geninfo_unexecuted_blocks=1 00:17:01.029 00:17:01.029 ' 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:17:01.029 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.030 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:01.030 01:01:07 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:01.030 01:01:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.607 01:01:13 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:07.607 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:07.607 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:07.607 
01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@405 -- # modinfo irdma 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:07.607 Found net devices under 0000:af:00.0: cvl_0_0 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.607 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:07.607 Found net devices under 0000:af:00.1: cvl_0_1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 
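The block above loads the irdma provider with RoCE enabled and then resolves each detected E810 function (device 0x159b) to its net device through sysfs. A minimal standalone sketch of that discovery step, assuming an Intel E810/irdma host like the one in this run; the 0x159b device ID and the roce_ena parameter are taken from the trace, while the loop itself is illustrative rather than the pci_bus_cache logic used by nvmf/common.sh:

#!/usr/bin/env bash
# Sketch: enable RoCE on irdma and map matching PCI functions to their net devices.
set -euo pipefail

intel=0x8086          # vendor ID, as used in the trace
e810=0x159b           # E810 device ID reported for 0000:af:00.0/1

# Load the RDMA provider for E810 with RoCE enabled (mirrors "modprobe irdma roce_ena=1").
modprobe irdma roce_ena=1

# Walk PCI devices and print the net interface(s) behind each matching function.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$e810" ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue
        echo "Found ${pci##*/}: ${net##*/}"
    done
done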
00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo cvl_0_0 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo cvl_0_1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 
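get_ip_address in the trace is just an ip/awk/cut pipeline over one interface; restated as a self-contained helper (the cvl_0_* names in the example are the interfaces from this run, the rest is a sketch):

# First IPv4 address assigned to an interface, as computed by nvmf/common.sh@117 above.
get_ip_address() {
    local interface=$1
    # "ip -o -4 addr show" prints one line per address; field 4 is "A.B.C.D/prefix".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Expected output on this setup:
#   get_ip_address cvl_0_0  -> 192.168.100.8
#   get_ip_address cvl_0_1  -> 192.168.100.9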
00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:17:07.608 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:07.608 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:07.608 altname enp175s0f0np0 00:17:07.608 altname ens801f0np0 00:17:07.608 inet 192.168.100.8/24 scope global cvl_0_0 00:17:07.608 valid_lft forever preferred_lft forever 00:17:07.608 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:07.608 valid_lft forever preferred_lft forever 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:17:07.608 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:07.608 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:07.608 altname enp175s0f1np1 00:17:07.608 altname ens801f1np1 00:17:07.608 inet 192.168.100.9/24 scope global cvl_0_1 00:17:07.608 valid_lft forever preferred_lft forever 00:17:07.608 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:07.608 valid_lft forever preferred_lft forever 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:07.608 01:01:13 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo cvl_0_0 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo cvl_0_1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:07.608 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:07.609 192.168.100.9' 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:07.609 192.168.100.9' 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:07.609 192.168.100.9' 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 
192.168.100.8 ']' 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=324161 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 324161 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 324161 ']' 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.609 01:01:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.609 [2024-11-19 01:01:13.621842] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:07.609 [2024-11-19 01:01:13.621944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.609 [2024-11-19 01:01:13.733386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.609 [2024-11-19 01:01:13.837037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.609 [2024-11-19 01:01:13.837090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.609 [2024-11-19 01:01:13.837101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.609 [2024-11-19 01:01:13.837111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.609 [2024-11-19 01:01:13.837119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
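Once both RDMA addresses are collected, the first and second target IPs are split out with head/tail, the nvme-rdma host module is loaded, and the SPDK target application is started before the RDMA transport is created over RPC. A condensed sketch of that sequence, assuming SPDK_DIR points at an SPDK checkout; the polling loop stands in for the framework's waitforlisten helper, and spdk_get_version is used only as a readiness probe:

SPDK_DIR=${SPDK_DIR:?path to an SPDK checkout}

# Split the discovered RDMA IPs (nvmf/common.sh@485-486 in the trace).
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

# Host-side kernel module used later by "nvme connect -t rdma".
modprobe nvme-rdma

# Start the target with the same flags as the trace and wait for its RPC socket.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until "$SPDK_DIR/scripts/rpc.py" spdk_get_version &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited" >&2; exit 1; }
    sleep 0.5
done

# Create the RDMA transport (target/rpc.sh@31 further down in the trace).
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192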
00:17:07.609 [2024-11-19 01:01:13.839796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.609 [2024-11-19 01:01:13.839868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.609 [2024-11-19 01:01:13.839936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.609 [2024-11-19 01:01:13.839957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:07.868 "tick_rate": 2100000000, 00:17:07.868 "poll_groups": [ 00:17:07.868 { 00:17:07.868 "name": "nvmf_tgt_poll_group_000", 00:17:07.868 "admin_qpairs": 0, 00:17:07.868 "io_qpairs": 0, 00:17:07.868 "current_admin_qpairs": 0, 00:17:07.868 "current_io_qpairs": 0, 00:17:07.868 "pending_bdev_io": 0, 00:17:07.868 "completed_nvme_io": 0, 00:17:07.868 "transports": [] 00:17:07.868 }, 00:17:07.868 { 00:17:07.868 "name": "nvmf_tgt_poll_group_001", 00:17:07.868 "admin_qpairs": 0, 00:17:07.868 "io_qpairs": 0, 00:17:07.868 "current_admin_qpairs": 0, 00:17:07.868 "current_io_qpairs": 0, 00:17:07.868 "pending_bdev_io": 0, 00:17:07.868 "completed_nvme_io": 0, 00:17:07.868 "transports": [] 00:17:07.868 }, 00:17:07.868 { 00:17:07.868 "name": "nvmf_tgt_poll_group_002", 00:17:07.868 "admin_qpairs": 0, 00:17:07.868 "io_qpairs": 0, 00:17:07.868 "current_admin_qpairs": 0, 00:17:07.868 "current_io_qpairs": 0, 00:17:07.868 "pending_bdev_io": 0, 00:17:07.868 "completed_nvme_io": 0, 00:17:07.868 "transports": [] 00:17:07.868 }, 00:17:07.868 { 00:17:07.868 "name": "nvmf_tgt_poll_group_003", 00:17:07.868 "admin_qpairs": 0, 00:17:07.868 "io_qpairs": 0, 00:17:07.868 "current_admin_qpairs": 0, 00:17:07.868 "current_io_qpairs": 0, 00:17:07.868 "pending_bdev_io": 0, 00:17:07.868 "completed_nvme_io": 0, 00:17:07.868 "transports": [] 00:17:07.868 } 00:17:07.868 ] 00:17:07.868 }' 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:17:07.868 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:08.355 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:08.355 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:08.355 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.355 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.355 [2024-11-19 01:01:14.620102] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:17:08.355 [2024-11-19 01:01:14.630816] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:17:08.355 [2024-11-19 01:01:14.630865] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:17:08.355 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.355 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:08.355 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.355 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.355 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.355 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:08.355 "tick_rate": 2100000000, 00:17:08.355 "poll_groups": [ 00:17:08.355 { 00:17:08.355 "name": "nvmf_tgt_poll_group_000", 00:17:08.355 "admin_qpairs": 0, 00:17:08.355 "io_qpairs": 0, 00:17:08.355 "current_admin_qpairs": 0, 00:17:08.355 "current_io_qpairs": 0, 00:17:08.355 "pending_bdev_io": 0, 00:17:08.355 "completed_nvme_io": 0, 00:17:08.355 "transports": [ 00:17:08.355 { 00:17:08.355 "trtype": "RDMA", 00:17:08.355 "pending_data_buffer": 0, 00:17:08.355 "devices": [ 00:17:08.355 { 00:17:08.355 "name": "rocep175s0f0", 00:17:08.355 "polls": 1281, 00:17:08.355 "idle_polls": 1281, 00:17:08.355 "completions": 0, 00:17:08.355 "requests": 0, 00:17:08.355 "request_latency": 0, 00:17:08.355 "pending_free_request": 0, 00:17:08.355 "pending_rdma_read": 0, 00:17:08.355 "pending_rdma_write": 0, 00:17:08.355 "pending_rdma_send": 0, 00:17:08.355 "total_send_wrs": 0, 00:17:08.355 "send_doorbell_updates": 0, 00:17:08.355 "total_recv_wrs": 0, 00:17:08.355 "recv_doorbell_updates": 0 00:17:08.355 }, 00:17:08.355 { 00:17:08.355 "name": "rocep175s0f1", 00:17:08.355 "polls": 1281, 00:17:08.355 "idle_polls": 1281, 00:17:08.355 "completions": 0, 00:17:08.355 "requests": 0, 00:17:08.355 "request_latency": 0, 00:17:08.355 "pending_free_request": 0, 00:17:08.355 "pending_rdma_read": 0, 00:17:08.355 "pending_rdma_write": 0, 00:17:08.355 "pending_rdma_send": 0, 00:17:08.355 "total_send_wrs": 0, 00:17:08.355 "send_doorbell_updates": 0, 00:17:08.355 "total_recv_wrs": 0, 00:17:08.355 "recv_doorbell_updates": 0 00:17:08.355 } 00:17:08.355 ] 00:17:08.355 } 00:17:08.355 ] 00:17:08.355 }, 00:17:08.355 { 00:17:08.355 "name": "nvmf_tgt_poll_group_001", 00:17:08.355 "admin_qpairs": 0, 00:17:08.355 "io_qpairs": 0, 00:17:08.355 "current_admin_qpairs": 0, 00:17:08.355 "current_io_qpairs": 0, 00:17:08.355 "pending_bdev_io": 0, 00:17:08.355 
"completed_nvme_io": 0, 00:17:08.355 "transports": [ 00:17:08.355 { 00:17:08.355 "trtype": "RDMA", 00:17:08.355 "pending_data_buffer": 0, 00:17:08.355 "devices": [ 00:17:08.355 { 00:17:08.355 "name": "rocep175s0f0", 00:17:08.355 "polls": 1306, 00:17:08.355 "idle_polls": 1306, 00:17:08.355 "completions": 0, 00:17:08.355 "requests": 0, 00:17:08.355 "request_latency": 0, 00:17:08.355 "pending_free_request": 0, 00:17:08.355 "pending_rdma_read": 0, 00:17:08.355 "pending_rdma_write": 0, 00:17:08.355 "pending_rdma_send": 0, 00:17:08.356 "total_send_wrs": 0, 00:17:08.356 "send_doorbell_updates": 0, 00:17:08.356 "total_recv_wrs": 0, 00:17:08.356 "recv_doorbell_updates": 0 00:17:08.356 }, 00:17:08.356 { 00:17:08.356 "name": "rocep175s0f1", 00:17:08.356 "polls": 1306, 00:17:08.356 "idle_polls": 1306, 00:17:08.356 "completions": 0, 00:17:08.356 "requests": 0, 00:17:08.356 "request_latency": 0, 00:17:08.356 "pending_free_request": 0, 00:17:08.356 "pending_rdma_read": 0, 00:17:08.356 "pending_rdma_write": 0, 00:17:08.356 "pending_rdma_send": 0, 00:17:08.356 "total_send_wrs": 0, 00:17:08.356 "send_doorbell_updates": 0, 00:17:08.356 "total_recv_wrs": 0, 00:17:08.356 "recv_doorbell_updates": 0 00:17:08.356 } 00:17:08.356 ] 00:17:08.356 } 00:17:08.356 ] 00:17:08.356 }, 00:17:08.356 { 00:17:08.356 "name": "nvmf_tgt_poll_group_002", 00:17:08.356 "admin_qpairs": 0, 00:17:08.356 "io_qpairs": 0, 00:17:08.356 "current_admin_qpairs": 0, 00:17:08.356 "current_io_qpairs": 0, 00:17:08.356 "pending_bdev_io": 0, 00:17:08.356 "completed_nvme_io": 0, 00:17:08.356 "transports": [ 00:17:08.356 { 00:17:08.356 "trtype": "RDMA", 00:17:08.356 "pending_data_buffer": 0, 00:17:08.356 "devices": [ 00:17:08.356 { 00:17:08.356 "name": "rocep175s0f0", 00:17:08.356 "polls": 1200, 00:17:08.356 "idle_polls": 1200, 00:17:08.356 "completions": 0, 00:17:08.356 "requests": 0, 00:17:08.356 "request_latency": 0, 00:17:08.356 "pending_free_request": 0, 00:17:08.356 "pending_rdma_read": 0, 00:17:08.356 "pending_rdma_write": 0, 00:17:08.356 "pending_rdma_send": 0, 00:17:08.356 "total_send_wrs": 0, 00:17:08.356 "send_doorbell_updates": 0, 00:17:08.356 "total_recv_wrs": 0, 00:17:08.356 "recv_doorbell_updates": 0 00:17:08.356 }, 00:17:08.356 { 00:17:08.356 "name": "rocep175s0f1", 00:17:08.356 "polls": 1200, 00:17:08.356 "idle_polls": 1200, 00:17:08.356 "completions": 0, 00:17:08.356 "requests": 0, 00:17:08.356 "request_latency": 0, 00:17:08.356 "pending_free_request": 0, 00:17:08.356 "pending_rdma_read": 0, 00:17:08.356 "pending_rdma_write": 0, 00:17:08.356 "pending_rdma_send": 0, 00:17:08.356 "total_send_wrs": 0, 00:17:08.356 "send_doorbell_updates": 0, 00:17:08.356 "total_recv_wrs": 0, 00:17:08.356 "recv_doorbell_updates": 0 00:17:08.356 } 00:17:08.356 ] 00:17:08.356 } 00:17:08.356 ] 00:17:08.356 }, 00:17:08.356 { 00:17:08.356 "name": "nvmf_tgt_poll_group_003", 00:17:08.356 "admin_qpairs": 0, 00:17:08.356 "io_qpairs": 0, 00:17:08.356 "current_admin_qpairs": 0, 00:17:08.356 "current_io_qpairs": 0, 00:17:08.356 "pending_bdev_io": 0, 00:17:08.356 "completed_nvme_io": 0, 00:17:08.356 "transports": [ 00:17:08.356 { 00:17:08.356 "trtype": "RDMA", 00:17:08.356 "pending_data_buffer": 0, 00:17:08.356 "devices": [ 00:17:08.356 { 00:17:08.356 "name": "rocep175s0f0", 00:17:08.356 "polls": 841, 00:17:08.356 "idle_polls": 841, 00:17:08.356 "completions": 0, 00:17:08.356 "requests": 0, 00:17:08.356 "request_latency": 0, 00:17:08.356 "pending_free_request": 0, 00:17:08.356 "pending_rdma_read": 0, 00:17:08.356 "pending_rdma_write": 0, 00:17:08.356 
"pending_rdma_send": 0, 00:17:08.356 "total_send_wrs": 0, 00:17:08.356 "send_doorbell_updates": 0, 00:17:08.356 "total_recv_wrs": 0, 00:17:08.356 "recv_doorbell_updates": 0 00:17:08.356 }, 00:17:08.356 { 00:17:08.356 "name": "rocep175s0f1", 00:17:08.356 "polls": 841, 00:17:08.356 "idle_polls": 841, 00:17:08.356 "completions": 0, 00:17:08.356 "requests": 0, 00:17:08.356 "request_latency": 0, 00:17:08.356 "pending_free_request": 0, 00:17:08.356 "pending_rdma_read": 0, 00:17:08.356 "pending_rdma_write": 0, 00:17:08.356 "pending_rdma_send": 0, 00:17:08.356 "total_send_wrs": 0, 00:17:08.356 "send_doorbell_updates": 0, 00:17:08.356 "total_recv_wrs": 0, 00:17:08.356 "recv_doorbell_updates": 0 00:17:08.356 } 00:17:08.356 ] 00:17:08.356 } 00:17:08.356 ] 00:17:08.356 } 00:17:08.356 ] 00:17:08.356 }' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 
00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.356 Malloc1 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:08.356 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.357 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.357 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.357 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:08.357 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.357 01:01:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.357 [2024-11-19 01:01:15.010433] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:08.357 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:17:08.619 [2024-11-19 01:01:15.048111] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:17:08.619 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:08.619 could not add new controller: failed to write to nvme-fabrics device 00:17:08.619 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:08.619 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.619 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.619 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.619 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:08.619 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.619 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.619 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.619 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:08.889 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:08.889 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:08.889 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:08.889 01:01:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:08.889 01:01:15 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:10.799 01:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:10.799 01:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:10.799 01:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:10.799 01:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:10.799 01:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:10.799 01:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:10.799 01:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:11.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:11.738 [2024-11-19 01:01:18.297572] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:17:11.738 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:11.738 could not add new controller: failed to write to nvme-fabrics device 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.738 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:12.001 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:12.001 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:12.001 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.001 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:12.001 01:01:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:13.990 01:01:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:13.990 01:01:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:13.990 01:01:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.990 01:01:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:13.990 01:01:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
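The connect/disconnect sequence around this point exercises the subsystem host ACL: a host NQN that is not whitelisted is rejected with "does not allow host", succeeds after nvmf_subsystem_add_host, is rejected again once the host is removed, and finally succeeds once allow_any_host is enabled. A compressed sketch of that flow driven directly through scripts/rpc.py (rpc_cmd in the trace is the test framework's wrapper around it), assuming cnode1 already exists with allow_any_host disabled; the host NQN is the one from this run, and the -i 15 / --hostid options used above are omitted for brevity:

rpc="${SPDK_DIR:?path to an SPDK checkout}/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

# 1. No ACL entry yet: the target refuses the connection.
nvme connect -t rdma -n "$nqn" --hostnqn="$hostnqn" -a 192.168.100.8 -s 4420 && echo "unexpected success"

# 2. Whitelist the host NQN, connect, then disconnect.
$rpc nvmf_subsystem_add_host "$nqn" "$hostnqn"
nvme connect -t rdma -n "$nqn" --hostnqn="$hostnqn" -a 192.168.100.8 -s 4420
nvme disconnect -n "$nqn"

# 3. Drop the host from the ACL and open the subsystem to any host instead.
$rpc nvmf_subsystem_remove_host "$nqn" "$hostnqn"
$rpc nvmf_subsystem_allow_any_host -e "$nqn"
nvme connect -t rdma -n "$nqn" --hostnqn="$hostnqn" -a 192.168.100.8 -s 4420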
common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.990 01:01:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:13.990 01:01:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:14.951 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.952 [2024-11-19 01:01:21.533744] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.952 01:01:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.952 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:15.219 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:15.219 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:15.219 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.219 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:15.219 01:01:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:17.161 01:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:17.161 01:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:17.161 01:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:17.161 01:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:17.161 01:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:17.161 01:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:17.161 01:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.131 [2024-11-19 01:01:24.727553] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.131 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.132 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:18.391 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.391 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:18.391 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.391 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:18.391 01:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 
2 00:17:20.317 01:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:20.317 01:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:20.317 01:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.317 01:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:20.317 01:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.317 01:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:20.317 01:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.278 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.279 [2024-11-19 01:01:27.914289] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.279 01:01:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:21.537 01:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:21.537 01:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:21.537 01:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:21.537 01:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:21.537 01:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:24.082 01:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:24.082 01:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:24.082 01:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.082 01:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:24.082 01:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.082 01:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:24.082 01:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:24.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:24.657 01:01:31 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.657 [2024-11-19 01:01:31.122247] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.657 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:24.923 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:24.923 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:24.923 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:24.923 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:24.923 01:01:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:26.924 01:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:26.924 01:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:26.924 01:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:26.924 01:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:26.924 01:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:26.924 01:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:26.924 01:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.931 01:01:34 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.931 [2024-11-19 01:01:34.321193] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:27.931 01:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:30.521 01:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:30.521 01:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:30.521 01:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:30.521 01:01:36 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:30.521 01:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.521 01:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:30.521 01:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:30.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.802 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 [2024-11-19 01:01:37.524710] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 
*** 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 [2024-11-19 01:01:37.576875] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.182 01:01:37 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.182 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 [2024-11-19 01:01:37.629104] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 [2024-11-19 01:01:37.681303] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 [2024-11-19 01:01:37.733486] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.183 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.484 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.484 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.484 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.484 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.484 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.484 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:31.484 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.484 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.484 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.484 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:31.484 "tick_rate": 2100000000, 00:17:31.484 "poll_groups": [ 00:17:31.484 { 00:17:31.484 "name": "nvmf_tgt_poll_group_000", 00:17:31.484 "admin_qpairs": 2, 00:17:31.484 "io_qpairs": 27, 00:17:31.484 "current_admin_qpairs": 0, 00:17:31.484 "current_io_qpairs": 0, 00:17:31.484 "pending_bdev_io": 0, 00:17:31.484 "completed_nvme_io": 127, 00:17:31.484 "transports": [ 00:17:31.484 { 00:17:31.484 "trtype": "RDMA", 00:17:31.484 "pending_data_buffer": 0, 00:17:31.484 "devices": [ 00:17:31.484 { 00:17:31.484 "name": "rocep175s0f0", 00:17:31.484 "polls": 2548600, 00:17:31.484 "idle_polls": 2548166, 00:17:31.484 "completions": 3907, 00:17:31.484 "requests": 3728, 00:17:31.484 "request_latency": 497494544, 00:17:31.484 "pending_free_request": 0, 00:17:31.484 "pending_rdma_read": 0, 00:17:31.484 "pending_rdma_write": 0, 00:17:31.484 "pending_rdma_send": 0, 00:17:31.484 "total_send_wrs": 303, 00:17:31.484 "send_doorbell_updates": 158, 00:17:31.484 "total_recv_wrs": 3728, 00:17:31.484 "recv_doorbell_updates": 185 00:17:31.484 }, 00:17:31.484 { 00:17:31.484 "name": "rocep175s0f1", 00:17:31.484 "polls": 2548600, 00:17:31.484 "idle_polls": 2548600, 00:17:31.484 "completions": 0, 00:17:31.484 "requests": 0, 00:17:31.484 "request_latency": 0, 00:17:31.484 "pending_free_request": 0, 00:17:31.484 "pending_rdma_read": 0, 00:17:31.484 "pending_rdma_write": 0, 00:17:31.484 "pending_rdma_send": 0, 00:17:31.484 "total_send_wrs": 0, 00:17:31.484 "send_doorbell_updates": 0, 00:17:31.484 "total_recv_wrs": 0, 00:17:31.484 "recv_doorbell_updates": 0 00:17:31.484 } 00:17:31.484 ] 00:17:31.485 } 00:17:31.485 ] 00:17:31.485 }, 00:17:31.485 { 00:17:31.485 "name": "nvmf_tgt_poll_group_001", 00:17:31.485 "admin_qpairs": 2, 00:17:31.485 "io_qpairs": 26, 00:17:31.485 "current_admin_qpairs": 0, 00:17:31.485 "current_io_qpairs": 0, 00:17:31.485 "pending_bdev_io": 0, 00:17:31.485 "completed_nvme_io": 125, 00:17:31.485 "transports": [ 00:17:31.485 { 00:17:31.485 "trtype": "RDMA", 00:17:31.485 "pending_data_buffer": 0, 00:17:31.485 "devices": [ 00:17:31.485 { 00:17:31.485 "name": "rocep175s0f0", 00:17:31.485 "polls": 2632056, 00:17:31.485 "idle_polls": 2631636, 00:17:31.485 "completions": 3740, 00:17:31.485 "requests": 3565, 00:17:31.485 "request_latency": 469594846, 00:17:31.485 "pending_free_request": 0, 00:17:31.485 "pending_rdma_read": 0, 00:17:31.485 "pending_rdma_write": 0, 00:17:31.485 "pending_rdma_send": 0, 00:17:31.485 "total_send_wrs": 298, 00:17:31.485 "send_doorbell_updates": 150, 00:17:31.485 "total_recv_wrs": 3565, 00:17:31.485 "recv_doorbell_updates": 176 00:17:31.485 }, 00:17:31.485 { 00:17:31.485 "name": 
"rocep175s0f1", 00:17:31.485 "polls": 2632056, 00:17:31.485 "idle_polls": 2632056, 00:17:31.485 "completions": 0, 00:17:31.485 "requests": 0, 00:17:31.485 "request_latency": 0, 00:17:31.485 "pending_free_request": 0, 00:17:31.485 "pending_rdma_read": 0, 00:17:31.485 "pending_rdma_write": 0, 00:17:31.485 "pending_rdma_send": 0, 00:17:31.485 "total_send_wrs": 0, 00:17:31.485 "send_doorbell_updates": 0, 00:17:31.485 "total_recv_wrs": 0, 00:17:31.485 "recv_doorbell_updates": 0 00:17:31.485 } 00:17:31.485 ] 00:17:31.485 } 00:17:31.485 ] 00:17:31.485 }, 00:17:31.485 { 00:17:31.485 "name": "nvmf_tgt_poll_group_002", 00:17:31.485 "admin_qpairs": 1, 00:17:31.485 "io_qpairs": 26, 00:17:31.485 "current_admin_qpairs": 0, 00:17:31.485 "current_io_qpairs": 0, 00:17:31.485 "pending_bdev_io": 0, 00:17:31.485 "completed_nvme_io": 127, 00:17:31.485 "transports": [ 00:17:31.485 { 00:17:31.485 "trtype": "RDMA", 00:17:31.485 "pending_data_buffer": 0, 00:17:31.485 "devices": [ 00:17:31.485 { 00:17:31.485 "name": "rocep175s0f0", 00:17:31.485 "polls": 2519676, 00:17:31.485 "idle_polls": 2519296, 00:17:31.485 "completions": 3700, 00:17:31.485 "requests": 3545, 00:17:31.485 "request_latency": 471974176, 00:17:31.485 "pending_free_request": 0, 00:17:31.485 "pending_rdma_read": 0, 00:17:31.485 "pending_rdma_write": 0, 00:17:31.485 "pending_rdma_send": 0, 00:17:31.485 "total_send_wrs": 269, 00:17:31.485 "send_doorbell_updates": 130, 00:17:31.485 "total_recv_wrs": 3545, 00:17:31.485 "recv_doorbell_updates": 156 00:17:31.485 }, 00:17:31.485 { 00:17:31.485 "name": "rocep175s0f1", 00:17:31.485 "polls": 2519676, 00:17:31.485 "idle_polls": 2519676, 00:17:31.485 "completions": 0, 00:17:31.485 "requests": 0, 00:17:31.485 "request_latency": 0, 00:17:31.485 "pending_free_request": 0, 00:17:31.485 "pending_rdma_read": 0, 00:17:31.485 "pending_rdma_write": 0, 00:17:31.485 "pending_rdma_send": 0, 00:17:31.485 "total_send_wrs": 0, 00:17:31.485 "send_doorbell_updates": 0, 00:17:31.485 "total_recv_wrs": 0, 00:17:31.485 "recv_doorbell_updates": 0 00:17:31.485 } 00:17:31.485 ] 00:17:31.485 } 00:17:31.485 ] 00:17:31.485 }, 00:17:31.485 { 00:17:31.485 "name": "nvmf_tgt_poll_group_003", 00:17:31.485 "admin_qpairs": 2, 00:17:31.485 "io_qpairs": 26, 00:17:31.485 "current_admin_qpairs": 0, 00:17:31.485 "current_io_qpairs": 0, 00:17:31.485 "pending_bdev_io": 0, 00:17:31.485 "completed_nvme_io": 76, 00:17:31.485 "transports": [ 00:17:31.485 { 00:17:31.485 "trtype": "RDMA", 00:17:31.485 "pending_data_buffer": 0, 00:17:31.485 "devices": [ 00:17:31.485 { 00:17:31.485 "name": "rocep175s0f0", 00:17:31.485 "polls": 1976830, 00:17:31.485 "idle_polls": 1976491, 00:17:31.485 "completions": 3642, 00:17:31.485 "requests": 3516, 00:17:31.485 "request_latency": 456205012, 00:17:31.485 "pending_free_request": 0, 00:17:31.485 "pending_rdma_read": 0, 00:17:31.485 "pending_rdma_write": 0, 00:17:31.485 "pending_rdma_send": 0, 00:17:31.485 "total_send_wrs": 200, 00:17:31.485 "send_doorbell_updates": 115, 00:17:31.485 "total_recv_wrs": 3516, 00:17:31.485 "recv_doorbell_updates": 141 00:17:31.485 }, 00:17:31.485 { 00:17:31.485 "name": "rocep175s0f1", 00:17:31.485 "polls": 1976830, 00:17:31.485 "idle_polls": 1976830, 00:17:31.485 "completions": 0, 00:17:31.485 "requests": 0, 00:17:31.485 "request_latency": 0, 00:17:31.485 "pending_free_request": 0, 00:17:31.485 "pending_rdma_read": 0, 00:17:31.485 "pending_rdma_write": 0, 00:17:31.485 "pending_rdma_send": 0, 00:17:31.485 "total_send_wrs": 0, 00:17:31.485 "send_doorbell_updates": 0, 00:17:31.485 "total_recv_wrs": 0, 
00:17:31.485 "recv_doorbell_updates": 0 00:17:31.485 } 00:17:31.485 ] 00:17:31.485 } 00:17:31.485 ] 00:17:31.485 } 00:17:31.485 ] 00:17:31.485 }' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 14989 > 0 )) 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 1895268578 > 0 )) 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.485 01:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:31.485 rmmod nvme_rdma 00:17:31.485 rmmod nvme_fabrics 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 324161 ']' 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 324161 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 324161 ']' 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 324161 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324161 00:17:31.485 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.486 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.486 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324161' 00:17:31.486 killing process with pid 324161 00:17:31.486 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 324161 00:17:31.486 01:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 324161 00:17:32.897 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:32.897 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:32.897 00:17:32.897 real 0m31.996s 00:17:32.897 user 1m43.683s 00:17:32.897 sys 0m6.185s 00:17:32.897 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.897 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.897 ************************************ 00:17:32.897 END TEST nvmf_rpc 00:17:32.897 ************************************ 00:17:32.897 01:01:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:17:32.897 01:01:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.897 01:01:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.897 01:01:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.897 ************************************ 00:17:32.897 START TEST nvmf_invalid 00:17:32.897 ************************************ 00:17:32.897 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:17:33.162 * Looking for test storage... 
00:17:33.162 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.162 --rc genhtml_branch_coverage=1 00:17:33.162 --rc genhtml_function_coverage=1 00:17:33.162 --rc genhtml_legend=1 00:17:33.162 --rc geninfo_all_blocks=1 00:17:33.162 --rc geninfo_unexecuted_blocks=1 00:17:33.162 00:17:33.162 ' 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.162 --rc genhtml_branch_coverage=1 00:17:33.162 --rc genhtml_function_coverage=1 00:17:33.162 --rc genhtml_legend=1 00:17:33.162 --rc geninfo_all_blocks=1 00:17:33.162 --rc geninfo_unexecuted_blocks=1 00:17:33.162 00:17:33.162 ' 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.162 --rc genhtml_branch_coverage=1 00:17:33.162 --rc genhtml_function_coverage=1 00:17:33.162 --rc genhtml_legend=1 00:17:33.162 --rc geninfo_all_blocks=1 00:17:33.162 --rc geninfo_unexecuted_blocks=1 00:17:33.162 00:17:33.162 ' 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.162 --rc genhtml_branch_coverage=1 00:17:33.162 --rc genhtml_function_coverage=1 00:17:33.162 --rc genhtml_legend=1 00:17:33.162 --rc geninfo_all_blocks=1 00:17:33.162 --rc geninfo_unexecuted_blocks=1 00:17:33.162 00:17:33.162 ' 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 
00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.162 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.163 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.163 01:01:39 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:39.900 01:01:45 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.900 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:39.901 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # 
for pci in "${pci_devs[@]}" 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:39.901 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@405 -- # modinfo irdma 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:39.901 Found net devices under 0000:af:00.0: cvl_0_0 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:39.901 Found net devices under 0000:af:00.1: cvl_0_1 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.901 01:01:45 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo cvl_0_0 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ 
cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo cvl_0_1 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:39.901 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:17:39.902 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:39.902 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:39.902 altname enp175s0f0np0 00:17:39.902 altname ens801f0np0 00:17:39.902 inet 192.168.100.8/24 scope global cvl_0_0 00:17:39.902 valid_lft forever preferred_lft forever 00:17:39.902 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:39.902 valid_lft forever preferred_lft forever 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:17:39.902 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:39.902 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:39.902 altname enp175s0f1np1 00:17:39.902 altname ens801f1np1 00:17:39.902 inet 192.168.100.9/24 scope global cvl_0_1 00:17:39.902 valid_lft forever preferred_lft forever 00:17:39.902 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:39.902 valid_lft forever preferred_lft forever 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo cvl_0_0 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo cvl_0_1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:17:39.902 
01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:39.902 192.168.100.9' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:39.902 192.168.100.9' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:39.902 192.168.100.9' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=331758 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 331758 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 331758 ']' 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
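In the entries above, nvmfappstart launches build/bin/nvmf_tgt with a shared-memory id, a tracepoint mask and the 0xF core mask, and waitforlisten then blocks until the target answers on /var/tmp/spdk.sock. A stripped-down sketch of that start-and-poll pattern; the retry loop and the use of rpc_get_methods as the readiness probe are assumptions here, not the exact waitforlisten implementation:

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    # Start the target in the background with the same flags seen in the trace.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the JSON-RPC socket until the app is listening (give up after ~10s).
    for ((i = 0; i < 100; i++)); do
        if "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.1
    done
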
00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.902 01:01:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:39.902 [2024-11-19 01:01:45.664946] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:39.902 [2024-11-19 01:01:45.665037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.902 [2024-11-19 01:01:45.790110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.902 [2024-11-19 01:01:45.896085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.902 [2024-11-19 01:01:45.896131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.902 [2024-11-19 01:01:45.896141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.902 [2024-11-19 01:01:45.896153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.902 [2024-11-19 01:01:45.896161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.902 [2024-11-19 01:01:45.898329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.902 [2024-11-19 01:01:45.898399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.902 [2024-11-19 01:01:45.898418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.902 [2024-11-19 01:01:45.898443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.902 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.902 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:39.903 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.903 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.903 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:39.903 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.903 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:39.903 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13199 00:17:40.177 [2024-11-19 01:01:46.700882] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:40.177 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:40.177 { 00:17:40.177 "nqn": "nqn.2016-06.io.spdk:cnode13199", 00:17:40.177 "tgt_name": "foobar", 00:17:40.177 "method": "nvmf_create_subsystem", 00:17:40.177 "req_id": 1 00:17:40.177 } 00:17:40.177 Got JSON-RPC error response 00:17:40.177 response: 00:17:40.177 { 00:17:40.177 "code": -32603, 00:17:40.177 "message": "Unable to find target foobar" 00:17:40.177 }' 
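The block above is the first negative test in target/invalid.sh: nvmf_create_subsystem is invoked against a target name that does not exist ("foobar"), and the test only passes if the captured JSON-RPC error contains "Unable to find target". Condensed into a standalone form (the error-handling style here is illustrative):

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py

    # Expect this call to fail; capture the JSON-RPC error text instead of aborting.
    out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13199 2>&1) || true

    # The test asserts on the error message, not just on the non-zero exit code.
    [[ $out == *"Unable to find target"* ]] || { echo "unexpected error: $out"; exit 1; }
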
00:17:40.177 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:40.177 { 00:17:40.177 "nqn": "nqn.2016-06.io.spdk:cnode13199", 00:17:40.177 "tgt_name": "foobar", 00:17:40.177 "method": "nvmf_create_subsystem", 00:17:40.177 "req_id": 1 00:17:40.177 } 00:17:40.177 Got JSON-RPC error response 00:17:40.177 response: 00:17:40.177 { 00:17:40.177 "code": -32603, 00:17:40.178 "message": "Unable to find target foobar" 00:17:40.178 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:40.178 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:40.178 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14036 00:17:40.465 [2024-11-19 01:01:46.909635] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14036: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:40.465 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:40.465 { 00:17:40.465 "nqn": "nqn.2016-06.io.spdk:cnode14036", 00:17:40.465 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:40.465 "method": "nvmf_create_subsystem", 00:17:40.465 "req_id": 1 00:17:40.465 } 00:17:40.465 Got JSON-RPC error response 00:17:40.465 response: 00:17:40.465 { 00:17:40.465 "code": -32602, 00:17:40.465 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:40.465 }' 00:17:40.465 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:40.465 { 00:17:40.465 "nqn": "nqn.2016-06.io.spdk:cnode14036", 00:17:40.465 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:40.465 "method": "nvmf_create_subsystem", 00:17:40.465 "req_id": 1 00:17:40.465 } 00:17:40.465 Got JSON-RPC error response 00:17:40.465 response: 00:17:40.465 { 00:17:40.465 "code": -32602, 00:17:40.465 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:40.465 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:40.465 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:40.465 01:01:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12460 00:17:40.465 [2024-11-19 01:01:47.106311] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12460: invalid model number 'SPDK_Controller' 00:17:40.465 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:40.465 { 00:17:40.465 "nqn": "nqn.2016-06.io.spdk:cnode12460", 00:17:40.465 "model_number": "SPDK_Controller\u001f", 00:17:40.465 "method": "nvmf_create_subsystem", 00:17:40.465 "req_id": 1 00:17:40.465 } 00:17:40.465 Got JSON-RPC error response 00:17:40.465 response: 00:17:40.465 { 00:17:40.465 "code": -32602, 00:17:40.465 "message": "Invalid MN SPDK_Controller\u001f" 00:17:40.465 }' 00:17:40.465 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:40.465 { 00:17:40.465 "nqn": "nqn.2016-06.io.spdk:cnode12460", 00:17:40.465 "model_number": "SPDK_Controller\u001f", 00:17:40.465 "method": "nvmf_create_subsystem", 00:17:40.465 "req_id": 1 00:17:40.465 } 00:17:40.465 Got JSON-RPC error response 00:17:40.465 response: 00:17:40.465 { 00:17:40.465 "code": -32602, 
00:17:40.465 "message": "Invalid MN SPDK_Controller\u001f" 00:17:40.465 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:40.465 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:40.465 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:40.465 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:40.465 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 127 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:40.760 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+='}' 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'p/g6Fld{c8Rp`1kG}i\S' 00:17:40.761 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'p/g6Fld{c8Rp`1kG}i\S' nqn.2016-06.io.spdk:cnode4819 00:17:41.049 [2024-11-19 01:01:47.447530] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4819: invalid serial number 'p/g6Fld{c8Rp`1kG}i\S' 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:41.049 { 00:17:41.049 "nqn": "nqn.2016-06.io.spdk:cnode4819", 00:17:41.049 "serial_number": "p/g6\u007fFld{c8Rp`1kG}i\\S", 00:17:41.049 "method": "nvmf_create_subsystem", 00:17:41.049 "req_id": 1 00:17:41.049 } 00:17:41.049 Got JSON-RPC error response 00:17:41.049 response: 00:17:41.049 { 00:17:41.049 "code": -32602, 00:17:41.049 "message": "Invalid SN p/g6\u007fFld{c8Rp`1kG}i\\S" 00:17:41.049 }' 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:41.049 { 00:17:41.049 "nqn": "nqn.2016-06.io.spdk:cnode4819", 00:17:41.049 "serial_number": "p/g6\u007fFld{c8Rp`1kG}i\\S", 00:17:41.049 "method": "nvmf_create_subsystem", 00:17:41.049 "req_id": 1 00:17:41.049 } 00:17:41.049 Got JSON-RPC error response 00:17:41.049 response: 00:17:41.049 { 00:17:41.049 "code": -32602, 00:17:41.049 "message": "Invalid SN p/g6\u007fFld{c8Rp`1kG}i\\S" 00:17:41.049 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:41.049 01:01:47 
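The long run of printf %x / echo -e / string+= steps above is gen_random_s assembling a 21-character serial number one random ASCII character at a time (RANDOM=0 was seeded earlier in invalid.sh, so the string is reproducible); the result contains a DEL (0x7f) character and is therefore rejected by nvmf_create_subsystem as an invalid SN. A condensed sketch of the same idea; the 32..126 range is a simplification of the original 32..127 character table, and the leading '-' handling of the original is omitted:

    # Illustrative re-statement of gen_random_s: build a string of $1 random
    # printable ASCII characters, appending one character per iteration.
    gen_random_s() {
        local length=$1 ll code string=
        for ((ll = 0; ll < length; ll++)); do
            code=$((32 + RANDOM % 95))                    # 32..126
            string+=$(echo -e "\x$(printf %x "$code")")   # same printf / echo -e trick as the trace
        done
        echo "$string"
    }

    RANDOM=0            # seed as the test does, so failures are reproducible
    gen_random_s 21     # e.g. an intentionally invalid serial number
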
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.049 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 
00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 68 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.050 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x2d' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.051 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:41.351 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:41.351 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:41.351 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'v>WP+@d2yQ:D:q_/.!\ t,oQ-t4];lI{mL7Lu}XZK' 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'v>WP+@d2yQ:D:q_/.!\ t,oQ-t4];lI{mL7Lu}XZK' nqn.2016-06.io.spdk:cnode26369 00:17:41.352 [2024-11-19 01:01:47.929244] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26369: invalid model number 'v>WP+@d2yQ:D:q_/.!\ t,oQ-t4];lI{mL7Lu}XZK' 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:41.352 { 00:17:41.352 "nqn": "nqn.2016-06.io.spdk:cnode26369", 00:17:41.352 "model_number": "v>WP+@d2yQ:D:q_/.!\\ t,oQ-t4];lI{mL7Lu}XZK", 00:17:41.352 "method": "nvmf_create_subsystem", 00:17:41.352 "req_id": 1 00:17:41.352 } 00:17:41.352 Got JSON-RPC error response 00:17:41.352 response: 00:17:41.352 { 00:17:41.352 "code": -32602, 00:17:41.352 "message": "Invalid MN v>WP+@d2yQ:D:q_/.!\\ t,oQ-t4];lI{mL7Lu}XZK" 00:17:41.352 }' 00:17:41.352 01:01:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:41.352 { 00:17:41.352 "nqn": "nqn.2016-06.io.spdk:cnode26369", 00:17:41.352 "model_number": "v>WP+@d2yQ:D:q_/.!\\ t,oQ-t4];lI{mL7Lu}XZK", 00:17:41.352 "method": "nvmf_create_subsystem", 00:17:41.352 "req_id": 1 00:17:41.352 } 00:17:41.352 Got JSON-RPC error response 00:17:41.352 response: 00:17:41.352 { 00:17:41.352 "code": -32602, 00:17:41.352 "message": "Invalid MN v>WP+@d2yQ:D:q_/.!\\ t,oQ-t4];lI{mL7Lu}XZK" 00:17:41.352 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:41.352 01:01:47 
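The trace above is target/invalid.sh assembling a random 41-character model number one byte at a time and handing it to nvmf_create_subsystem, which rejects it as "Invalid MN" since the string is one byte longer than the 40-byte NVMe model-number field allows. A minimal standalone sketch of the same negative test, assuming a running nvmf target and the in-tree scripts/rpc.py (the cnode name below is illustrative, not the one from this run):

    # Build a 41-char model number (one past the 40-byte limit) and expect rejection.
    MN="$(tr -dc 'A-Za-z0-9' </dev/urandom | head -c 41)"
    out=$(./scripts/rpc.py nvmf_create_subsystem -d "$MN" nqn.2016-06.io.spdk:cnode1 2>&1 || true)
    [[ "$out" == *"Invalid MN"* ]] && echo "rejected as expected" || echo "unexpected: $out"
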
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:17:41.642 [2024-11-19 01:01:48.159184] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:17:41.642 [2024-11-19 01:01:48.168627] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:17:41.642 [2024-11-19 01:01:48.168661] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:17:41.642 [2024-11-19 01:01:48.171552] iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 257/767 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:17:41.642 [2024-11-19 01:01:48.171585] iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:17:41.642 [2024-11-19 01:01:48.172176] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:17:41.642 [2024-11-19 01:01:48.173432] iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 257/767 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:17:41.642 [2024-11-19 01:01:48.173463] iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:17:41.642 [2024-11-19 01:01:48.174048] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:17:41.642 [2024-11-19 01:01:48.175237] iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 257/767 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:17:41.642 [2024-11-19 01:01:48.175267] iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:17:41.642 [2024-11-19 01:01:48.175855] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 
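The transport comes up on both roce ports, but the iobuf errors above show the per-poll-group large-buffer cache could only be populated to 257/767 entries and point at spdk_iobuf_opts.large_pool_count. A hedged sketch of how that option would be raised, assuming the target was started with --wait-for-rpc so buffer options can still be changed before framework init (the pool size below is illustrative, not derived from scripts/calc-iobuf.py):

    # Enlarge the large iobuf pool, finish init, then create the RDMA transport.
    ./scripts/rpc.py iobuf_set_options --large-pool-count 4096
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport --trtype rdma
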
00:17:41.642 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:41.922 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:17:41.922 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:17:41.922 192.168.100.9' 00:17:41.922 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:41.922 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:17:41.922 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:17:41.922 [2024-11-19 01:01:48.574424] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:41.922 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:41.922 { 00:17:41.922 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:41.922 "listen_address": { 00:17:41.922 "trtype": "rdma", 00:17:41.923 "traddr": "192.168.100.8", 00:17:41.923 "trsvcid": "4421" 00:17:41.923 }, 00:17:41.923 "method": "nvmf_subsystem_remove_listener", 00:17:41.923 "req_id": 1 00:17:41.923 } 00:17:41.923 Got JSON-RPC error response 00:17:41.923 response: 00:17:41.923 { 00:17:41.923 "code": -32602, 00:17:41.923 "message": "Invalid parameters" 00:17:41.923 }' 00:17:41.923 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:41.923 { 00:17:41.923 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:41.923 "listen_address": { 00:17:41.923 "trtype": "rdma", 00:17:41.923 "traddr": "192.168.100.8", 00:17:41.923 "trsvcid": "4421" 00:17:41.923 }, 00:17:41.923 "method": "nvmf_subsystem_remove_listener", 00:17:41.923 "req_id": 1 00:17:41.923 } 00:17:41.923 Got JSON-RPC error response 00:17:41.923 response: 00:17:41.923 { 00:17:41.923 "code": -32602, 00:17:41.923 "message": "Invalid parameters" 00:17:41.923 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:41.923 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode95 -i 0 00:17:42.201 [2024-11-19 01:01:48.771058] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode95: invalid cntlid range [0-65519] 00:17:42.201 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:42.201 { 00:17:42.201 "nqn": "nqn.2016-06.io.spdk:cnode95", 00:17:42.201 "min_cntlid": 0, 00:17:42.201 "method": "nvmf_create_subsystem", 00:17:42.201 "req_id": 1 00:17:42.201 } 00:17:42.201 Got JSON-RPC error response 00:17:42.201 response: 00:17:42.201 { 00:17:42.201 "code": -32602, 00:17:42.201 "message": "Invalid cntlid range [0-65519]" 00:17:42.201 }' 00:17:42.201 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:42.201 { 00:17:42.201 "nqn": "nqn.2016-06.io.spdk:cnode95", 00:17:42.201 "min_cntlid": 0, 00:17:42.201 "method": "nvmf_create_subsystem", 00:17:42.201 "req_id": 1 00:17:42.201 } 00:17:42.201 Got JSON-RPC error response 00:17:42.201 response: 00:17:42.201 { 00:17:42.201 "code": -32602, 00:17:42.201 "message": 
"Invalid cntlid range [0-65519]" 00:17:42.201 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:42.201 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10111 -i 65520 00:17:42.461 [2024-11-19 01:01:48.963776] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10111: invalid cntlid range [65520-65519] 00:17:42.461 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:42.461 { 00:17:42.461 "nqn": "nqn.2016-06.io.spdk:cnode10111", 00:17:42.461 "min_cntlid": 65520, 00:17:42.461 "method": "nvmf_create_subsystem", 00:17:42.461 "req_id": 1 00:17:42.461 } 00:17:42.461 Got JSON-RPC error response 00:17:42.461 response: 00:17:42.461 { 00:17:42.461 "code": -32602, 00:17:42.461 "message": "Invalid cntlid range [65520-65519]" 00:17:42.461 }' 00:17:42.461 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:42.461 { 00:17:42.461 "nqn": "nqn.2016-06.io.spdk:cnode10111", 00:17:42.461 "min_cntlid": 65520, 00:17:42.461 "method": "nvmf_create_subsystem", 00:17:42.461 "req_id": 1 00:17:42.461 } 00:17:42.461 Got JSON-RPC error response 00:17:42.461 response: 00:17:42.461 { 00:17:42.461 "code": -32602, 00:17:42.461 "message": "Invalid cntlid range [65520-65519]" 00:17:42.461 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:42.461 01:01:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24489 -I 0 00:17:42.461 [2024-11-19 01:01:49.152517] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24489: invalid cntlid range [1-0] 00:17:42.720 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:42.720 { 00:17:42.720 "nqn": "nqn.2016-06.io.spdk:cnode24489", 00:17:42.720 "max_cntlid": 0, 00:17:42.720 "method": "nvmf_create_subsystem", 00:17:42.720 "req_id": 1 00:17:42.720 } 00:17:42.720 Got JSON-RPC error response 00:17:42.720 response: 00:17:42.720 { 00:17:42.720 "code": -32602, 00:17:42.720 "message": "Invalid cntlid range [1-0]" 00:17:42.720 }' 00:17:42.720 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:42.720 { 00:17:42.720 "nqn": "nqn.2016-06.io.spdk:cnode24489", 00:17:42.720 "max_cntlid": 0, 00:17:42.720 "method": "nvmf_create_subsystem", 00:17:42.720 "req_id": 1 00:17:42.720 } 00:17:42.720 Got JSON-RPC error response 00:17:42.720 response: 00:17:42.720 { 00:17:42.720 "code": -32602, 00:17:42.720 "message": "Invalid cntlid range [1-0]" 00:17:42.720 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:42.720 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31749 -I 65520 00:17:42.720 [2024-11-19 01:01:49.373328] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31749: invalid cntlid range [1-65520] 00:17:42.720 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:42.720 { 00:17:42.720 "nqn": "nqn.2016-06.io.spdk:cnode31749", 00:17:42.720 "max_cntlid": 65520, 00:17:42.720 "method": "nvmf_create_subsystem", 00:17:42.720 "req_id": 1 00:17:42.720 } 
00:17:42.720 Got JSON-RPC error response 00:17:42.720 response: 00:17:42.720 { 00:17:42.720 "code": -32602, 00:17:42.720 "message": "Invalid cntlid range [1-65520]" 00:17:42.720 }' 00:17:42.720 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:42.720 { 00:17:42.720 "nqn": "nqn.2016-06.io.spdk:cnode31749", 00:17:42.720 "max_cntlid": 65520, 00:17:42.720 "method": "nvmf_create_subsystem", 00:17:42.720 "req_id": 1 00:17:42.720 } 00:17:42.720 Got JSON-RPC error response 00:17:42.720 response: 00:17:42.720 { 00:17:42.720 "code": -32602, 00:17:42.720 "message": "Invalid cntlid range [1-65520]" 00:17:42.720 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:42.720 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15067 -i 6 -I 5 00:17:42.978 [2024-11-19 01:01:49.590130] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15067: invalid cntlid range [6-5] 00:17:42.978 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:42.978 { 00:17:42.978 "nqn": "nqn.2016-06.io.spdk:cnode15067", 00:17:42.978 "min_cntlid": 6, 00:17:42.978 "max_cntlid": 5, 00:17:42.978 "method": "nvmf_create_subsystem", 00:17:42.978 "req_id": 1 00:17:42.978 } 00:17:42.978 Got JSON-RPC error response 00:17:42.978 response: 00:17:42.978 { 00:17:42.978 "code": -32602, 00:17:42.978 "message": "Invalid cntlid range [6-5]" 00:17:42.978 }' 00:17:42.978 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:42.978 { 00:17:42.978 "nqn": "nqn.2016-06.io.spdk:cnode15067", 00:17:42.978 "min_cntlid": 6, 00:17:42.979 "max_cntlid": 5, 00:17:42.979 "method": "nvmf_create_subsystem", 00:17:42.979 "req_id": 1 00:17:42.979 } 00:17:42.979 Got JSON-RPC error response 00:17:42.979 response: 00:17:42.979 { 00:17:42.979 "code": -32602, 00:17:42.979 "message": "Invalid cntlid range [6-5]" 00:17:42.979 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:42.979 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:43.238 { 00:17:43.238 "name": "foobar", 00:17:43.238 "method": "nvmf_delete_target", 00:17:43.238 "req_id": 1 00:17:43.238 } 00:17:43.238 Got JSON-RPC error response 00:17:43.238 response: 00:17:43.238 { 00:17:43.238 "code": -32602, 00:17:43.238 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:43.238 }' 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:43.238 { 00:17:43.238 "name": "foobar", 00:17:43.238 "method": "nvmf_delete_target", 00:17:43.238 "req_id": 1 00:17:43.238 } 00:17:43.238 Got JSON-RPC error response 00:17:43.238 response: 00:17:43.238 { 00:17:43.238 "code": -32602, 00:17:43.238 "message": "The specified target doesn't exist, cannot delete it." 
00:17:43.238 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:43.238 rmmod nvme_rdma 00:17:43.238 rmmod nvme_fabrics 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 331758 ']' 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 331758 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 331758 ']' 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 331758 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331758 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331758' 00:17:43.238 killing process with pid 331758 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 331758 00:17:43.238 01:01:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 331758 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:44.618 00:17:44.618 real 0m11.466s 00:17:44.618 user 0m24.334s 00:17:44.618 sys 0m5.387s 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:44.618 ************************************ 00:17:44.618 END TEST 
nvmf_invalid 00:17:44.618 ************************************ 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.618 ************************************ 00:17:44.618 START TEST nvmf_connect_stress 00:17:44.618 ************************************ 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:17:44.618 * Looking for test storage... 00:17:44.618 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:44.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.618 --rc genhtml_branch_coverage=1 00:17:44.618 --rc genhtml_function_coverage=1 00:17:44.618 --rc genhtml_legend=1 00:17:44.618 --rc geninfo_all_blocks=1 00:17:44.618 --rc geninfo_unexecuted_blocks=1 00:17:44.618 00:17:44.618 ' 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:44.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.618 --rc genhtml_branch_coverage=1 00:17:44.618 --rc genhtml_function_coverage=1 00:17:44.618 --rc genhtml_legend=1 00:17:44.618 --rc geninfo_all_blocks=1 00:17:44.618 --rc geninfo_unexecuted_blocks=1 00:17:44.618 00:17:44.618 ' 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:44.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.618 --rc genhtml_branch_coverage=1 00:17:44.618 --rc genhtml_function_coverage=1 00:17:44.618 --rc genhtml_legend=1 00:17:44.618 --rc geninfo_all_blocks=1 00:17:44.618 --rc geninfo_unexecuted_blocks=1 00:17:44.618 00:17:44.618 ' 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:44.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.618 --rc genhtml_branch_coverage=1 00:17:44.618 --rc genhtml_function_coverage=1 00:17:44.618 --rc genhtml_legend=1 00:17:44.618 --rc geninfo_all_blocks=1 00:17:44.618 --rc geninfo_unexecuted_blocks=1 00:17:44.618 00:17:44.618 ' 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.618 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:44.619 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:44.619 01:01:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.194 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:51.195 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.195 
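This block is nvmftestinit walking the known Intel/Mellanox PCI device IDs and matching both 0x8086:0x159b (E810) functions. The "Found net devices under ..." lines that follow come from resolving each PCI function to its kernel netdev through sysfs, the same glob the trace shows as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*). A hedged sketch of that lookup, with the PCI address taken from this log:

    # Resolve a PCI function to the net device(s) the kernel created for it.
    pci=0000:af:00.0
    for d in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$d" ] && echo "Found net devices under $pci: $(basename "$d")"
    done
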
01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:51.195 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@405 -- # modinfo irdma 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:51.195 Found net devices under 0000:af:00.0: cvl_0_0 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:51.195 Found net devices under 0000:af:00.1: cvl_0_1 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:51.195 01:01:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo cvl_0_0 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo cvl_0_1 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:17:51.195 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:17:51.196 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:51.196 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:51.196 altname enp175s0f0np0 00:17:51.196 altname ens801f0np0 00:17:51.196 inet 192.168.100.8/24 scope global cvl_0_0 00:17:51.196 valid_lft forever preferred_lft forever 00:17:51.196 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:51.196 valid_lft forever preferred_lft forever 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:51.196 01:01:57 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:17:51.196 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:51.196 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:51.196 altname enp175s0f1np1 00:17:51.196 altname ens801f1np1 00:17:51.196 inet 192.168.100.9/24 scope global cvl_0_1 00:17:51.196 valid_lft forever preferred_lft forever 00:17:51.196 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:51.196 valid_lft forever preferred_lft forever 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo cvl_0_0 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:51.196 01:01:57 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo cvl_0_1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:51.196 192.168.100.9' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:51.196 192.168.100.9' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:51.196 192.168.100.9' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.196 01:01:57 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=335892 00:17:51.196 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 335892 00:17:51.197 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:51.197 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 335892 ']' 00:17:51.197 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.197 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.197 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.197 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.197 01:01:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.197 [2024-11-19 01:01:57.261852] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:51.197 [2024-11-19 01:01:57.261951] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.197 [2024-11-19 01:01:57.387714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:51.197 [2024-11-19 01:01:57.490878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.197 [2024-11-19 01:01:57.490923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.197 [2024-11-19 01:01:57.490933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.197 [2024-11-19 01:01:57.490943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.197 [2024-11-19 01:01:57.490951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
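At this point in the trace nvmf/common.sh has collected the two interface addresses into RDMA_IP_LIST and split them into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, and nvmf_tgt has been launched with core mask 0xE while the harness waits on /var/tmp/spdk.sock. A minimal bash sketch of the address-splitting step, using the values printed in this run (variable names follow the trace; this is an illustration, not part of the captured output):

  # Each address was read off its interface earlier in the trace with, e.g.:
  #   ip -o -4 addr show cvl_0_0 | awk '{print $4}' | cut -d/ -f1    -> 192.168.100.8
  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  # First line of the list becomes the primary target address (common.sh@485).
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # -> 192.168.100.8
  # Everything after the first line, trimmed to one entry, becomes the secondary (common.sh@486).
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # -> 192.168.100.9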
00:17:51.197 [2024-11-19 01:01:57.493133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.197 [2024-11-19 01:01:57.493192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.197 [2024-11-19 01:01:57.493207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.457 [2024-11-19 01:01:58.124928] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000028fc0/0x617000007c40) succeed. 00:17:51.457 [2024-11-19 01:01:58.134422] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029140/0x617000007fc0) succeed. 00:17:51.457 [2024-11-19 01:01:58.134451] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.457 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.717 [2024-11-19 01:01:58.154765] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.717 NULL1 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=336137 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 
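The rpc_cmd invocations traced above configure the just-started target end to end: an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1, an RDMA listener on 192.168.100.8:4420, and a null bdev to serve I/O, after which the connect_stress stressor is launched against that listener. In SPDK's test framework rpc_cmd forwards requests to the target's RPC socket (typically via scripts/rpc.py); that wrapper detail is an assumption about the harness, not something shown in this log. A rough manual equivalent, with flag values copied from the trace:

  # Sketch of the configuration sequence captured above, issued from the SPDK repo root.
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  # The stressor is then pointed at that listener (command copied from the trace, path shortened):
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10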
00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 
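The seq 1 20 loop whose iterations are traced above and just below queues a batch of RPC requests into rpc.txt; the exact text each cat appends is not visible in this capture. What follows in the log is a liveness-gated loop: connect_stress.sh line 34 checks kill -0 on the stressor's PID (336137 in this run) and line 35 issues rpc_cmd, repeating until the stressor exits. A plausible bash sketch of that control flow, with the unobserved details marked as assumptions:

  # Sketch only; reconstructs the visible kill -0 / rpc_cmd pattern, not the script verbatim.
  PERF_PID=336137                                                          # pid of connect_stress in this run
  rpcs=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt  # path as shown by rm -f above
  while kill -0 "$PERF_PID"; do        # keep exercising RPCs while the stressor is still alive
      rpc_cmd < "$rpcs"                # assumed: replay the queued requests over the RPC socket
  done
  # Once connect_stress exits, kill -0 fails with "No such process" (seen near the end of this test),
  # the harness waits on the pid, removes rpc.txt, and tears the target down via nvmftestfini.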
00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.717 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.976 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.976 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:51.976 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.976 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.976 01:01:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.544 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.544 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:52.544 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.544 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.544 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.803 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.803 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:52.803 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.803 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.803 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.371 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.371 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:53.371 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.371 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.371 01:01:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.630 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.630 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 
336137 00:17:53.630 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.630 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.630 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.888 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.888 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:53.888 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.889 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.889 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.457 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.457 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:54.457 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.457 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.457 01:02:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.716 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.716 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:54.716 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.716 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.716 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.974 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.974 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:54.974 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.974 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.974 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.542 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.542 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:55.542 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.542 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.542 01:02:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.800 01:02:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.800 01:02:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 336137 00:17:55.800 01:02:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.800 01:02:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.800 01:02:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.368 01:02:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.368 01:02:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:56.368 01:02:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.368 01:02:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.368 01:02:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.627 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.627 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:56.627 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.627 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.627 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.886 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.886 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:56.886 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.886 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.886 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.454 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.454 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:57.454 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.454 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.454 01:02:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.713 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.713 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:57.713 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.713 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.713 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.972 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.972 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 336137 00:17:57.972 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.972 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.972 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.540 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.540 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:58.540 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.540 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.540 01:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.799 01:02:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.799 01:02:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:58.799 01:02:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.799 01:02:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.799 01:02:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.059 01:02:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.059 01:02:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:59.059 01:02:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.059 01:02:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.059 01:02:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.627 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.627 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:59.627 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.627 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.627 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.887 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.887 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:17:59.887 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.887 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.887 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.454 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.454 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 336137 00:18:00.455 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.455 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.455 01:02:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.714 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.714 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:18:00.714 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.714 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.714 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.973 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.973 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:18:00.973 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.973 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.973 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.541 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.541 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:18:01.541 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.541 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.541 01:02:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.801 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.801 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:18:01.801 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.801 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.801 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.060 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 336137 00:18:02.060 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (336137) - No such process 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 336137 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:02.060 01:02:08 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:02.060 rmmod nvme_rdma 00:18:02.060 rmmod nvme_fabrics 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 335892 ']' 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 335892 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 335892 ']' 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 335892 00:18:02.060 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:02.320 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.320 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335892 00:18:02.320 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:02.320 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:02.320 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335892' 00:18:02.320 killing process with pid 335892 00:18:02.320 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 335892 00:18:02.320 01:02:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 335892 00:18:03.701 01:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:03.701 01:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:03.701 00:18:03.701 real 0m18.887s 00:18:03.702 user 0m43.889s 00:18:03.702 sys 0m9.595s 00:18:03.702 01:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.702 01:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.702 
************************************ 00:18:03.702 END TEST nvmf_connect_stress 00:18:03.702 ************************************ 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:03.702 ************************************ 00:18:03.702 START TEST nvmf_fused_ordering 00:18:03.702 ************************************ 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:18:03.702 * Looking for test storage... 00:18:03.702 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:03.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.702 --rc genhtml_branch_coverage=1 00:18:03.702 --rc genhtml_function_coverage=1 00:18:03.702 --rc genhtml_legend=1 00:18:03.702 --rc geninfo_all_blocks=1 00:18:03.702 --rc geninfo_unexecuted_blocks=1 00:18:03.702 00:18:03.702 ' 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:03.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.702 --rc genhtml_branch_coverage=1 00:18:03.702 --rc genhtml_function_coverage=1 00:18:03.702 --rc genhtml_legend=1 00:18:03.702 --rc geninfo_all_blocks=1 00:18:03.702 --rc geninfo_unexecuted_blocks=1 00:18:03.702 00:18:03.702 ' 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:03.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.702 --rc genhtml_branch_coverage=1 00:18:03.702 --rc genhtml_function_coverage=1 00:18:03.702 --rc genhtml_legend=1 00:18:03.702 --rc geninfo_all_blocks=1 00:18:03.702 --rc geninfo_unexecuted_blocks=1 00:18:03.702 00:18:03.702 ' 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:03.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.702 --rc genhtml_branch_coverage=1 00:18:03.702 --rc genhtml_function_coverage=1 00:18:03.702 --rc genhtml_legend=1 00:18:03.702 --rc geninfo_all_blocks=1 00:18:03.702 --rc geninfo_unexecuted_blocks=1 00:18:03.702 00:18:03.702 ' 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.702 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:03.703 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:03.703 01:02:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.278 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.278 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:10.278 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:10.278 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:10.278 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:10.279 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.279 
01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:10.279 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@405 -- # modinfo irdma 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:10.279 Found net devices under 0000:af:00.0: cvl_0_0 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:10.279 Found net devices under 0000:af:00.1: cvl_0_1 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:10.279 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:18:10.280 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:10.280 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:18:10.280 altname enp175s0f0np0 00:18:10.280 altname ens801f0np0 00:18:10.280 inet 192.168.100.8/24 scope global cvl_0_0 00:18:10.280 valid_lft forever preferred_lft forever 00:18:10.280 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:18:10.280 valid_lft forever preferred_lft forever 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.280 01:02:15 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:18:10.280 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:10.280 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:18:10.280 altname enp175s0f1np1 00:18:10.280 altname ens801f1np1 00:18:10.280 inet 192.168.100.9/24 scope global cvl_0_1 00:18:10.280 valid_lft forever preferred_lft forever 00:18:10.280 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:18:10.280 valid_lft forever preferred_lft forever 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:10.280 01:02:15 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:10.280 192.168.100.9' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:10.280 192.168.100.9' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:10.280 192.168.100.9' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:10.280 01:02:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:10.280 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:10.280 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:10.280 01:02:16 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.280 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.280 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=341012 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 341012 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 341012 ']' 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.281 [2024-11-19 01:02:16.111614] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:10.281 [2024-11-19 01:02:16.111714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.281 [2024-11-19 01:02:16.241436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.281 [2024-11-19 01:02:16.346043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.281 [2024-11-19 01:02:16.346091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.281 [2024-11-19 01:02:16.346101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.281 [2024-11-19 01:02:16.346110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.281 [2024-11-19 01:02:16.346118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.281 [2024-11-19 01:02:16.347441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.281 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.540 [2024-11-19 01:02:16.971730] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000289c0/0x617000007c40) succeed. 00:18:10.540 [2024-11-19 01:02:16.980999] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000028b40/0x617000007fc0) succeed. 00:18:10.540 [2024-11-19 01:02:16.981027] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:18:10.540 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.540 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:10.540 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.540 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.540 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.540 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:10.540 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.540 01:02:16 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.540 [2024-11-19 01:02:17.002750] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.540 NULL1 
00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.540 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:10.540 [2024-11-19 01:02:17.080154] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:10.540 [2024-11-19 01:02:17.080212] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341256 ] 00:18:10.799 Attached to nqn.2016-06.io.spdk:cnode1 00:18:10.799 Namespace ID: 1 size: 1GB 00:18:10.799 fused_ordering(0) 00:18:10.799 fused_ordering(1) 00:18:10.799 fused_ordering(2) 00:18:10.799 fused_ordering(3) 00:18:10.799 fused_ordering(4) 00:18:10.799 fused_ordering(5) 00:18:10.799 fused_ordering(6) 00:18:10.799 fused_ordering(7) 00:18:10.799 fused_ordering(8) 00:18:10.799 fused_ordering(9) 00:18:10.799 fused_ordering(10) 00:18:10.799 fused_ordering(11) 00:18:10.799 fused_ordering(12) 00:18:10.799 fused_ordering(13) 00:18:10.799 fused_ordering(14) 00:18:10.799 fused_ordering(15) 00:18:10.799 fused_ordering(16) 00:18:10.799 fused_ordering(17) 00:18:10.799 fused_ordering(18) 00:18:10.799 fused_ordering(19) 00:18:10.799 fused_ordering(20) 00:18:10.799 fused_ordering(21) 00:18:10.799 fused_ordering(22) 00:18:10.799 fused_ordering(23) 00:18:10.799 fused_ordering(24) 00:18:10.799 fused_ordering(25) 00:18:10.799 fused_ordering(26) 00:18:10.799 fused_ordering(27) 00:18:10.799 fused_ordering(28) 00:18:10.799 fused_ordering(29) 00:18:10.799 fused_ordering(30) 00:18:10.799 fused_ordering(31) 00:18:10.799 fused_ordering(32) 00:18:10.799 fused_ordering(33) 00:18:10.799 fused_ordering(34) 00:18:10.799 fused_ordering(35) 00:18:10.799 fused_ordering(36) 00:18:10.799 fused_ordering(37) 00:18:10.799 fused_ordering(38) 00:18:10.799 fused_ordering(39) 00:18:10.799 fused_ordering(40) 00:18:10.799 fused_ordering(41) 00:18:10.799 fused_ordering(42) 00:18:10.799 fused_ordering(43) 00:18:10.799 fused_ordering(44) 00:18:10.799 fused_ordering(45) 00:18:10.799 fused_ordering(46) 00:18:10.799 fused_ordering(47) 00:18:10.799 fused_ordering(48) 00:18:10.799 fused_ordering(49) 00:18:10.799 
fused_ordering(50) 00:18:10.799 fused_ordering(51) 00:18:10.799 fused_ordering(52) 00:18:10.799 fused_ordering(53) 00:18:10.799 fused_ordering(54) 00:18:10.799 fused_ordering(55) 00:18:10.799 fused_ordering(56) 00:18:10.799 fused_ordering(57) 00:18:10.799 fused_ordering(58) 00:18:10.799 fused_ordering(59) 00:18:10.799 fused_ordering(60) 00:18:10.799 fused_ordering(61) 00:18:10.799 fused_ordering(62) 00:18:10.799 fused_ordering(63) 00:18:10.799 fused_ordering(64) 00:18:10.799 fused_ordering(65) 00:18:10.799 fused_ordering(66) 00:18:10.799 fused_ordering(67) 00:18:10.799 fused_ordering(68) 00:18:10.799 fused_ordering(69) 00:18:10.799 fused_ordering(70) 00:18:10.799 fused_ordering(71) 00:18:10.799 fused_ordering(72) 00:18:10.799 fused_ordering(73) 00:18:10.799 fused_ordering(74) 00:18:10.799 fused_ordering(75) 00:18:10.799 fused_ordering(76) 00:18:10.799 fused_ordering(77) 00:18:10.799 fused_ordering(78) 00:18:10.799 fused_ordering(79) 00:18:10.799 fused_ordering(80) 00:18:10.799 fused_ordering(81) 00:18:10.799 fused_ordering(82) 00:18:10.799 fused_ordering(83) 00:18:10.799 fused_ordering(84) 00:18:10.799 fused_ordering(85) 00:18:10.799 fused_ordering(86) 00:18:10.799 fused_ordering(87) 00:18:10.799 fused_ordering(88) 00:18:10.799 fused_ordering(89) 00:18:10.799 fused_ordering(90) 00:18:10.799 fused_ordering(91) 00:18:10.799 fused_ordering(92) 00:18:10.799 fused_ordering(93) 00:18:10.799 fused_ordering(94) 00:18:10.799 fused_ordering(95) 00:18:10.799 fused_ordering(96) 00:18:10.799 fused_ordering(97) 00:18:10.799 fused_ordering(98) 00:18:10.799 fused_ordering(99) 00:18:10.799 fused_ordering(100) 00:18:10.799 fused_ordering(101) 00:18:10.799 fused_ordering(102) 00:18:10.799 fused_ordering(103) 00:18:10.799 fused_ordering(104) 00:18:10.799 fused_ordering(105) 00:18:10.799 fused_ordering(106) 00:18:10.799 fused_ordering(107) 00:18:10.799 fused_ordering(108) 00:18:10.799 fused_ordering(109) 00:18:10.799 fused_ordering(110) 00:18:10.799 fused_ordering(111) 00:18:10.799 fused_ordering(112) 00:18:10.799 fused_ordering(113) 00:18:10.799 fused_ordering(114) 00:18:10.799 fused_ordering(115) 00:18:10.799 fused_ordering(116) 00:18:10.799 fused_ordering(117) 00:18:10.799 fused_ordering(118) 00:18:10.799 fused_ordering(119) 00:18:10.799 fused_ordering(120) 00:18:10.799 fused_ordering(121) 00:18:10.799 fused_ordering(122) 00:18:10.799 fused_ordering(123) 00:18:10.799 fused_ordering(124) 00:18:10.799 fused_ordering(125) 00:18:10.799 fused_ordering(126) 00:18:10.799 fused_ordering(127) 00:18:10.799 fused_ordering(128) 00:18:10.799 fused_ordering(129) 00:18:10.799 fused_ordering(130) 00:18:10.799 fused_ordering(131) 00:18:10.799 fused_ordering(132) 00:18:10.799 fused_ordering(133) 00:18:10.799 fused_ordering(134) 00:18:10.799 fused_ordering(135) 00:18:10.799 fused_ordering(136) 00:18:10.799 fused_ordering(137) 00:18:10.799 fused_ordering(138) 00:18:10.799 fused_ordering(139) 00:18:10.799 fused_ordering(140) 00:18:10.799 fused_ordering(141) 00:18:10.799 fused_ordering(142) 00:18:10.799 fused_ordering(143) 00:18:10.799 fused_ordering(144) 00:18:10.799 fused_ordering(145) 00:18:10.799 fused_ordering(146) 00:18:10.799 fused_ordering(147) 00:18:10.799 fused_ordering(148) 00:18:10.799 fused_ordering(149) 00:18:10.799 fused_ordering(150) 00:18:10.799 fused_ordering(151) 00:18:10.799 fused_ordering(152) 00:18:10.799 fused_ordering(153) 00:18:10.799 fused_ordering(154) 00:18:10.799 fused_ordering(155) 00:18:10.799 fused_ordering(156) 00:18:10.799 fused_ordering(157) 00:18:10.799 fused_ordering(158) 00:18:10.799 
fused_ordering(159) 00:18:10.799 fused_ordering(160) 00:18:10.799 fused_ordering(161) 00:18:10.799 fused_ordering(162) 00:18:10.799 fused_ordering(163) 00:18:10.799 fused_ordering(164) 00:18:10.799 fused_ordering(165) 00:18:10.799 fused_ordering(166) 00:18:10.799 fused_ordering(167) 00:18:10.800 fused_ordering(168) 00:18:10.800 fused_ordering(169) 00:18:10.800 fused_ordering(170) 00:18:10.800 fused_ordering(171) 00:18:10.800 fused_ordering(172) 00:18:10.800 fused_ordering(173) 00:18:10.800 fused_ordering(174) 00:18:10.800 fused_ordering(175) 00:18:10.800 fused_ordering(176) 00:18:10.800 fused_ordering(177) 00:18:10.800 fused_ordering(178) 00:18:10.800 fused_ordering(179) 00:18:10.800 fused_ordering(180) 00:18:10.800 fused_ordering(181) 00:18:10.800 fused_ordering(182) 00:18:10.800 fused_ordering(183) 00:18:10.800 fused_ordering(184) 00:18:10.800 fused_ordering(185) 00:18:10.800 fused_ordering(186) 00:18:10.800 fused_ordering(187) 00:18:10.800 fused_ordering(188) 00:18:10.800 fused_ordering(189) 00:18:10.800 fused_ordering(190) 00:18:10.800 fused_ordering(191) 00:18:10.800 fused_ordering(192) 00:18:10.800 fused_ordering(193) 00:18:10.800 fused_ordering(194) 00:18:10.800 fused_ordering(195) 00:18:10.800 fused_ordering(196) 00:18:10.800 fused_ordering(197) 00:18:10.800 fused_ordering(198) 00:18:10.800 fused_ordering(199) 00:18:10.800 fused_ordering(200) 00:18:10.800 fused_ordering(201) 00:18:10.800 fused_ordering(202) 00:18:10.800 fused_ordering(203) 00:18:10.800 fused_ordering(204) 00:18:10.800 fused_ordering(205) 00:18:10.800 fused_ordering(206) 00:18:10.800 fused_ordering(207) 00:18:10.800 fused_ordering(208) 00:18:10.800 fused_ordering(209) 00:18:10.800 fused_ordering(210) 00:18:10.800 fused_ordering(211) 00:18:10.800 fused_ordering(212) 00:18:10.800 fused_ordering(213) 00:18:10.800 fused_ordering(214) 00:18:10.800 fused_ordering(215) 00:18:10.800 fused_ordering(216) 00:18:10.800 fused_ordering(217) 00:18:10.800 fused_ordering(218) 00:18:10.800 fused_ordering(219) 00:18:10.800 fused_ordering(220) 00:18:10.800 fused_ordering(221) 00:18:10.800 fused_ordering(222) 00:18:10.800 fused_ordering(223) 00:18:10.800 fused_ordering(224) 00:18:10.800 fused_ordering(225) 00:18:10.800 fused_ordering(226) 00:18:10.800 fused_ordering(227) 00:18:10.800 fused_ordering(228) 00:18:10.800 fused_ordering(229) 00:18:10.800 fused_ordering(230) 00:18:10.800 fused_ordering(231) 00:18:10.800 fused_ordering(232) 00:18:10.800 fused_ordering(233) 00:18:10.800 fused_ordering(234) 00:18:10.800 fused_ordering(235) 00:18:10.800 fused_ordering(236) 00:18:10.800 fused_ordering(237) 00:18:10.800 fused_ordering(238) 00:18:10.800 fused_ordering(239) 00:18:10.800 fused_ordering(240) 00:18:10.800 fused_ordering(241) 00:18:10.800 fused_ordering(242) 00:18:10.800 fused_ordering(243) 00:18:10.800 fused_ordering(244) 00:18:10.800 fused_ordering(245) 00:18:10.800 fused_ordering(246) 00:18:10.800 fused_ordering(247) 00:18:10.800 fused_ordering(248) 00:18:10.800 fused_ordering(249) 00:18:10.800 fused_ordering(250) 00:18:10.800 fused_ordering(251) 00:18:10.800 fused_ordering(252) 00:18:10.800 fused_ordering(253) 00:18:10.800 fused_ordering(254) 00:18:10.800 fused_ordering(255) 00:18:10.800 fused_ordering(256) 00:18:10.800 fused_ordering(257) 00:18:10.800 fused_ordering(258) 00:18:10.800 fused_ordering(259) 00:18:10.800 fused_ordering(260) 00:18:10.800 fused_ordering(261) 00:18:10.800 fused_ordering(262) 00:18:10.800 fused_ordering(263) 00:18:10.800 fused_ordering(264) 00:18:10.800 fused_ordering(265) 00:18:10.800 fused_ordering(266) 
00:18:10.800 fused_ordering(267) 00:18:10.800 fused_ordering(268) 00:18:10.800 fused_ordering(269) 00:18:10.800 fused_ordering(270) 00:18:10.800 fused_ordering(271) 00:18:10.800 fused_ordering(272) 00:18:10.800 fused_ordering(273) 00:18:10.800 fused_ordering(274) 00:18:10.800 fused_ordering(275) 00:18:10.800 fused_ordering(276) 00:18:10.800 fused_ordering(277) 00:18:10.800 fused_ordering(278) 00:18:10.800 fused_ordering(279) 00:18:10.800 fused_ordering(280) 00:18:10.800 fused_ordering(281) 00:18:10.800 fused_ordering(282) 00:18:10.800 fused_ordering(283) 00:18:10.800 fused_ordering(284) 00:18:10.800 fused_ordering(285) 00:18:10.800 fused_ordering(286) 00:18:10.800 fused_ordering(287) 00:18:10.800 fused_ordering(288) 00:18:10.800 fused_ordering(289) 00:18:10.800 fused_ordering(290) 00:18:10.800 fused_ordering(291) 00:18:10.800 fused_ordering(292) 00:18:10.800 fused_ordering(293) 00:18:10.800 fused_ordering(294) 00:18:10.800 fused_ordering(295) 00:18:10.800 fused_ordering(296) 00:18:10.800 fused_ordering(297) 00:18:10.800 fused_ordering(298) 00:18:10.800 fused_ordering(299) 00:18:10.800 fused_ordering(300) 00:18:10.800 fused_ordering(301) 00:18:10.800 fused_ordering(302) 00:18:10.800 fused_ordering(303) 00:18:10.800 fused_ordering(304) 00:18:10.800 fused_ordering(305) 00:18:10.800 fused_ordering(306) 00:18:10.800 fused_ordering(307) 00:18:10.800 fused_ordering(308) 00:18:10.800 fused_ordering(309) 00:18:10.800 fused_ordering(310) 00:18:10.800 fused_ordering(311) 00:18:10.800 fused_ordering(312) 00:18:10.800 fused_ordering(313) 00:18:10.800 fused_ordering(314) 00:18:10.800 fused_ordering(315) 00:18:10.800 fused_ordering(316) 00:18:10.800 fused_ordering(317) 00:18:10.800 fused_ordering(318) 00:18:10.800 fused_ordering(319) 00:18:10.800 fused_ordering(320) 00:18:10.800 fused_ordering(321) 00:18:10.800 fused_ordering(322) 00:18:10.800 fused_ordering(323) 00:18:10.800 fused_ordering(324) 00:18:10.800 fused_ordering(325) 00:18:10.800 fused_ordering(326) 00:18:10.800 fused_ordering(327) 00:18:10.800 fused_ordering(328) 00:18:10.800 fused_ordering(329) 00:18:10.800 fused_ordering(330) 00:18:10.800 fused_ordering(331) 00:18:10.800 fused_ordering(332) 00:18:10.800 fused_ordering(333) 00:18:10.800 fused_ordering(334) 00:18:10.800 fused_ordering(335) 00:18:10.800 fused_ordering(336) 00:18:10.800 fused_ordering(337) 00:18:10.800 fused_ordering(338) 00:18:10.800 fused_ordering(339) 00:18:10.800 fused_ordering(340) 00:18:10.800 fused_ordering(341) 00:18:10.800 fused_ordering(342) 00:18:10.800 fused_ordering(343) 00:18:10.800 fused_ordering(344) 00:18:10.800 fused_ordering(345) 00:18:10.800 fused_ordering(346) 00:18:10.800 fused_ordering(347) 00:18:10.800 fused_ordering(348) 00:18:10.800 fused_ordering(349) 00:18:10.800 fused_ordering(350) 00:18:10.800 fused_ordering(351) 00:18:10.800 fused_ordering(352) 00:18:10.800 fused_ordering(353) 00:18:10.800 fused_ordering(354) 00:18:10.800 fused_ordering(355) 00:18:10.800 fused_ordering(356) 00:18:10.800 fused_ordering(357) 00:18:10.800 fused_ordering(358) 00:18:10.800 fused_ordering(359) 00:18:10.800 fused_ordering(360) 00:18:10.800 fused_ordering(361) 00:18:10.800 fused_ordering(362) 00:18:10.800 fused_ordering(363) 00:18:10.800 fused_ordering(364) 00:18:10.800 fused_ordering(365) 00:18:10.800 fused_ordering(366) 00:18:10.800 fused_ordering(367) 00:18:10.800 fused_ordering(368) 00:18:10.800 fused_ordering(369) 00:18:10.800 fused_ordering(370) 00:18:10.800 fused_ordering(371) 00:18:10.800 fused_ordering(372) 00:18:10.800 fused_ordering(373) 00:18:10.800 
fused_ordering(374) 00:18:10.800 fused_ordering(375) 00:18:10.800 fused_ordering(376) 00:18:10.800 fused_ordering(377) 00:18:10.800 fused_ordering(378) 00:18:10.800 fused_ordering(379) 00:18:10.800 fused_ordering(380) 00:18:10.800 fused_ordering(381) 00:18:10.800 fused_ordering(382) 00:18:10.800 fused_ordering(383) 00:18:10.800 fused_ordering(384) 00:18:10.800 fused_ordering(385) 00:18:10.800 fused_ordering(386) 00:18:10.800 fused_ordering(387) 00:18:10.800 fused_ordering(388) 00:18:10.800 fused_ordering(389) 00:18:10.800 fused_ordering(390) 00:18:10.800 fused_ordering(391) 00:18:10.800 fused_ordering(392) 00:18:10.800 fused_ordering(393) 00:18:10.800 fused_ordering(394) 00:18:10.800 fused_ordering(395) 00:18:10.800 fused_ordering(396) 00:18:10.800 fused_ordering(397) 00:18:10.800 fused_ordering(398) 00:18:10.800 fused_ordering(399) 00:18:10.800 fused_ordering(400) 00:18:10.800 fused_ordering(401) 00:18:10.800 fused_ordering(402) 00:18:10.800 fused_ordering(403) 00:18:10.800 fused_ordering(404) 00:18:10.800 fused_ordering(405) 00:18:10.800 fused_ordering(406) 00:18:10.800 fused_ordering(407) 00:18:10.800 fused_ordering(408) 00:18:10.800 fused_ordering(409) 00:18:10.800 fused_ordering(410) 00:18:11.060 fused_ordering(411) 00:18:11.060 fused_ordering(412) 00:18:11.060 fused_ordering(413) 00:18:11.060 fused_ordering(414) 00:18:11.060 fused_ordering(415) 00:18:11.060 fused_ordering(416) 00:18:11.060 fused_ordering(417) 00:18:11.060 fused_ordering(418) 00:18:11.060 fused_ordering(419) 00:18:11.060 fused_ordering(420) 00:18:11.060 fused_ordering(421) 00:18:11.060 fused_ordering(422) 00:18:11.060 fused_ordering(423) 00:18:11.060 fused_ordering(424) 00:18:11.060 fused_ordering(425) 00:18:11.060 fused_ordering(426) 00:18:11.060 fused_ordering(427) 00:18:11.060 fused_ordering(428) 00:18:11.060 fused_ordering(429) 00:18:11.060 fused_ordering(430) 00:18:11.060 fused_ordering(431) 00:18:11.060 fused_ordering(432) 00:18:11.060 fused_ordering(433) 00:18:11.060 fused_ordering(434) 00:18:11.060 fused_ordering(435) 00:18:11.060 fused_ordering(436) 00:18:11.060 fused_ordering(437) 00:18:11.060 fused_ordering(438) 00:18:11.060 fused_ordering(439) 00:18:11.060 fused_ordering(440) 00:18:11.060 fused_ordering(441) 00:18:11.060 fused_ordering(442) 00:18:11.060 fused_ordering(443) 00:18:11.060 fused_ordering(444) 00:18:11.060 fused_ordering(445) 00:18:11.060 fused_ordering(446) 00:18:11.060 fused_ordering(447) 00:18:11.060 fused_ordering(448) 00:18:11.060 fused_ordering(449) 00:18:11.060 fused_ordering(450) 00:18:11.060 fused_ordering(451) 00:18:11.060 fused_ordering(452) 00:18:11.060 fused_ordering(453) 00:18:11.060 fused_ordering(454) 00:18:11.060 fused_ordering(455) 00:18:11.060 fused_ordering(456) 00:18:11.060 fused_ordering(457) 00:18:11.060 fused_ordering(458) 00:18:11.060 fused_ordering(459) 00:18:11.060 fused_ordering(460) 00:18:11.060 fused_ordering(461) 00:18:11.060 fused_ordering(462) 00:18:11.060 fused_ordering(463) 00:18:11.060 fused_ordering(464) 00:18:11.060 fused_ordering(465) 00:18:11.060 fused_ordering(466) 00:18:11.060 fused_ordering(467) 00:18:11.060 fused_ordering(468) 00:18:11.060 fused_ordering(469) 00:18:11.060 fused_ordering(470) 00:18:11.060 fused_ordering(471) 00:18:11.060 fused_ordering(472) 00:18:11.060 fused_ordering(473) 00:18:11.060 fused_ordering(474) 00:18:11.060 fused_ordering(475) 00:18:11.060 fused_ordering(476) 00:18:11.060 fused_ordering(477) 00:18:11.060 fused_ordering(478) 00:18:11.060 fused_ordering(479) 00:18:11.060 fused_ordering(480) 00:18:11.060 fused_ordering(481) 
00:18:11.060 fused_ordering(482) 00:18:11.060 fused_ordering(483) 00:18:11.060 fused_ordering(484) 00:18:11.060 fused_ordering(485) 00:18:11.060 fused_ordering(486) 00:18:11.060 fused_ordering(487) 00:18:11.060 fused_ordering(488) 00:18:11.060 fused_ordering(489) 00:18:11.060 fused_ordering(490) 00:18:11.060 fused_ordering(491) 00:18:11.060 fused_ordering(492) 00:18:11.060 fused_ordering(493) 00:18:11.060 fused_ordering(494) 00:18:11.060 fused_ordering(495) 00:18:11.060 fused_ordering(496) 00:18:11.060 fused_ordering(497) 00:18:11.060 fused_ordering(498) 00:18:11.060 fused_ordering(499) 00:18:11.060 fused_ordering(500) 00:18:11.060 fused_ordering(501) 00:18:11.060 fused_ordering(502) 00:18:11.060 fused_ordering(503) 00:18:11.060 fused_ordering(504) 00:18:11.060 fused_ordering(505) 00:18:11.060 fused_ordering(506) 00:18:11.060 fused_ordering(507) 00:18:11.060 fused_ordering(508) 00:18:11.060 fused_ordering(509) 00:18:11.060 fused_ordering(510) 00:18:11.060 fused_ordering(511) 00:18:11.060 fused_ordering(512) 00:18:11.060 fused_ordering(513) 00:18:11.060 fused_ordering(514) 00:18:11.060 fused_ordering(515) 00:18:11.060 fused_ordering(516) 00:18:11.060 fused_ordering(517) 00:18:11.060 fused_ordering(518) 00:18:11.060 fused_ordering(519) 00:18:11.060 fused_ordering(520) 00:18:11.060 fused_ordering(521) 00:18:11.060 fused_ordering(522) 00:18:11.060 fused_ordering(523) 00:18:11.060 fused_ordering(524) 00:18:11.060 fused_ordering(525) 00:18:11.060 fused_ordering(526) 00:18:11.060 fused_ordering(527) 00:18:11.060 fused_ordering(528) 00:18:11.060 fused_ordering(529) 00:18:11.060 fused_ordering(530) 00:18:11.060 fused_ordering(531) 00:18:11.060 fused_ordering(532) 00:18:11.060 fused_ordering(533) 00:18:11.060 fused_ordering(534) 00:18:11.060 fused_ordering(535) 00:18:11.060 fused_ordering(536) 00:18:11.060 fused_ordering(537) 00:18:11.060 fused_ordering(538) 00:18:11.060 fused_ordering(539) 00:18:11.060 fused_ordering(540) 00:18:11.060 fused_ordering(541) 00:18:11.060 fused_ordering(542) 00:18:11.060 fused_ordering(543) 00:18:11.060 fused_ordering(544) 00:18:11.060 fused_ordering(545) 00:18:11.060 fused_ordering(546) 00:18:11.060 fused_ordering(547) 00:18:11.060 fused_ordering(548) 00:18:11.060 fused_ordering(549) 00:18:11.060 fused_ordering(550) 00:18:11.060 fused_ordering(551) 00:18:11.060 fused_ordering(552) 00:18:11.060 fused_ordering(553) 00:18:11.060 fused_ordering(554) 00:18:11.060 fused_ordering(555) 00:18:11.060 fused_ordering(556) 00:18:11.060 fused_ordering(557) 00:18:11.060 fused_ordering(558) 00:18:11.060 fused_ordering(559) 00:18:11.060 fused_ordering(560) 00:18:11.060 fused_ordering(561) 00:18:11.060 fused_ordering(562) 00:18:11.060 fused_ordering(563) 00:18:11.060 fused_ordering(564) 00:18:11.060 fused_ordering(565) 00:18:11.060 fused_ordering(566) 00:18:11.060 fused_ordering(567) 00:18:11.060 fused_ordering(568) 00:18:11.060 fused_ordering(569) 00:18:11.060 fused_ordering(570) 00:18:11.060 fused_ordering(571) 00:18:11.060 fused_ordering(572) 00:18:11.060 fused_ordering(573) 00:18:11.060 fused_ordering(574) 00:18:11.060 fused_ordering(575) 00:18:11.060 fused_ordering(576) 00:18:11.060 fused_ordering(577) 00:18:11.060 fused_ordering(578) 00:18:11.060 fused_ordering(579) 00:18:11.060 fused_ordering(580) 00:18:11.060 fused_ordering(581) 00:18:11.060 fused_ordering(582) 00:18:11.060 fused_ordering(583) 00:18:11.060 fused_ordering(584) 00:18:11.060 fused_ordering(585) 00:18:11.060 fused_ordering(586) 00:18:11.060 fused_ordering(587) 00:18:11.060 fused_ordering(588) 00:18:11.060 
fused_ordering(589) 00:18:11.060 fused_ordering(590) 00:18:11.060 fused_ordering(591) ... 00:18:11.321 fused_ordering(1017) 00:18:11.321 fused_ordering(1018)
00:18:11.321 fused_ordering(1019) 00:18:11.321 fused_ordering(1020) 00:18:11.321 fused_ordering(1021) 00:18:11.321 fused_ordering(1022) 00:18:11.321 fused_ordering(1023) 00:18:11.321 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:11.321 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:11.321 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:11.321 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:11.321 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:11.321 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:11.321 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:11.321 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:11.321 01:02:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:11.321 rmmod nvme_rdma 00:18:11.321 rmmod nvme_fabrics 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 341012 ']' 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 341012 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 341012 ']' 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 341012 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 341012 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 341012' 00:18:11.580 killing process with pid 341012 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 341012 00:18:11.580 01:02:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 341012 00:18:12.516 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:12.516 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:12.516 00:18:12.516 real 0m9.156s 00:18:12.516 user 0m5.804s 00:18:12.516 sys 0m4.931s 00:18:12.517 01:02:19 
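The nvmftestfini path above just unwinds what the fused_ordering test set up: it unloads the host-side NVMe/RDMA modules and stops the target process (pid 341012 in this run). A minimal stand-alone sketch of the same cleanup, assuming the target pid is in $nvmfpid and nothing else still needs the modules:

  sync
  modprobe -v -r nvme-rdma        # host RDMA transport, as removed in the log
  modprobe -v -r nvme-fabrics     # fabrics core can go once no transport uses it
  kill "$nvmfpid" 2>/dev/null || true
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done   # wait for nvmf_tgt to exit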
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.517 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:12.517 ************************************ 00:18:12.517 END TEST nvmf_fused_ordering 00:18:12.517 ************************************ 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:12.776 ************************************ 00:18:12.776 START TEST nvmf_ns_masking 00:18:12.776 ************************************ 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:18:12.776 * Looking for test storage... 00:18:12.776 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:12.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.776 --rc genhtml_branch_coverage=1 00:18:12.776 --rc genhtml_function_coverage=1 00:18:12.776 --rc genhtml_legend=1 00:18:12.776 --rc geninfo_all_blocks=1 00:18:12.776 --rc geninfo_unexecuted_blocks=1 00:18:12.776 00:18:12.776 ' 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:12.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.776 --rc genhtml_branch_coverage=1 00:18:12.776 --rc genhtml_function_coverage=1 00:18:12.776 --rc genhtml_legend=1 00:18:12.776 --rc geninfo_all_blocks=1 00:18:12.776 --rc geninfo_unexecuted_blocks=1 00:18:12.776 00:18:12.776 ' 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:12.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.776 --rc genhtml_branch_coverage=1 00:18:12.776 --rc genhtml_function_coverage=1 00:18:12.776 --rc genhtml_legend=1 00:18:12.776 --rc geninfo_all_blocks=1 00:18:12.776 --rc geninfo_unexecuted_blocks=1 00:18:12.776 00:18:12.776 ' 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:12.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.776 --rc genhtml_branch_coverage=1 00:18:12.776 --rc genhtml_function_coverage=1 00:18:12.776 --rc genhtml_legend=1 00:18:12.776 --rc geninfo_all_blocks=1 00:18:12.776 --rc geninfo_unexecuted_blocks=1 00:18:12.776 00:18:12.776 ' 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.776 01:02:19 
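run_test above hands ns_masking.sh the same --transport=rdma flag used for the rest of this job. To rerun only this test outside the Jenkins harness, something like the following should suffice (workspace path taken from this log; adjust for a local checkout, and note the script expects the NIC and RDMA-module setup that nvmf/common.sh performs later in this section):

  cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
  sudo ./test/nvmf/target/ns_masking.sh --transport=rdma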
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.776 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:12.777 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:12.777 01:02:19 
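nvmf/common.sh above derives NVME_HOSTNQN from nvme gen-hostnqn and reuses its trailing UUID as NVME_HOSTID, packing both into NVME_HOST for tests that connect with the default host identity (this particular test later supplies its own host NQN and ID instead). The same derivation as a short sketch, assuming nvme-cli is installed:

  HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*:}           # keep only the UUID after the last colon
  echo "--hostnqn=$HOSTNQN --hostid=$HOSTID"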
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:12.777 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c2b15450-f7ee-4a1a-a711-612be530d92d 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=cfa43174-3ba5-47d3-9431-54021a605aa3 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6265531a-e34c-4487-b3ed-36f1d950ceda 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:13.036 01:02:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:19.610 01:02:25 
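The e810/x722/mlx arrays above classify candidate NICs purely by PCI vendor/device ID; 0x8086:0x159b is one of the E810 IDs and is what gets matched just below. Two quick probes to see what the script will find on a given host (pciutils assumed; the bus address is the one from this run):

  lspci -D -d 8086:159b                      # E810 functions, e.g. 0000:af:00.0 and 0000:af:00.1
  ls /sys/bus/pci/devices/0000:af:00.0/net   # netdev bound to that function (cvl_0_0 here)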
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:19.610 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:19.610 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:19.610 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@405 -- # modinfo irdma 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:18:19.611 Found net devices under 0000:af:00.0: cvl_0_0 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:19.611 Found net devices under 0000:af:00.1: cvl_0_1 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
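rdma_device_init above amounts to loading the generic IB/RDMA stack plus the vendor driver with RoCE enabled (irdma was already probed with roce_ena=1 during NIC discovery). The same set of modules, grouped; modprobe resolves the inter-module dependencies, so the exact order is not critical:

  modprobe irdma roce_ena=1       # E810 RDMA driver in RoCE mode, as used in this run
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done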
nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:18:19.611 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:19.611 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:18:19.611 altname enp175s0f0np0 00:18:19.611 altname ens801f0np0 00:18:19.611 inet 192.168.100.8/24 scope global cvl_0_0 00:18:19.611 valid_lft forever preferred_lft forever 00:18:19.611 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:18:19.611 valid_lft forever preferred_lft forever 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:18:19.611 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:19.611 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:18:19.611 altname enp175s0f1np1 00:18:19.611 altname ens801f1np1 00:18:19.611 inet 192.168.100.9/24 scope global cvl_0_1 00:18:19.611 valid_lft forever preferred_lft forever 00:18:19.611 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:18:19.611 valid_lft forever preferred_lft forever 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:19.611 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:19.612 192.168.100.9' 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:19.612 192.168.100.9' 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:19.612 192.168.100.9' 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- 
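allocate_nic_ips/get_ip_address and the RDMA_IP_LIST handling above reduce to one ip(8) pipeline per interface; with the interface names assigned on this host it is simply:

  first_ip=$(ip -o -4 addr show cvl_0_0 | awk '{print $4}' | cut -d/ -f1)    # 192.168.100.8 here
  second_ip=$(ip -o -4 addr show cvl_0_1 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.9 here
  echo "$first_ip $second_ip"     # becomes NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP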
# timing_enter start_nvmf_tgt 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=344745 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 344745 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 344745 ']' 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.612 01:02:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:19.612 [2024-11-19 01:02:25.436537] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:19.612 [2024-11-19 01:02:25.436630] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.612 [2024-11-19 01:02:25.563595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.612 [2024-11-19 01:02:25.666651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.612 [2024-11-19 01:02:25.666698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.612 [2024-11-19 01:02:25.666708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.612 [2024-11-19 01:02:25.666719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.612 [2024-11-19 01:02:25.666727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
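nvmfappstart above launches the target with -i 0 (shared-memory id) and -e 0xFFFF (tracepoint group mask) and then waits for the RPC socket before the test continues. A stand-alone sketch of that startup from an SPDK build tree, using rpc_get_methods as the readiness probe (the harness's waitforlisten may probe differently):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                   # target is ready once the RPC socket answers
  done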
00:18:19.612 [2024-11-19 01:02:25.668072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.612 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.612 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:19.612 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.612 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.612 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:19.612 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.612 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:19.871 [2024-11-19 01:02:26.485859] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000289c0/0x617000007fc0) succeed. 00:18:19.871 [2024-11-19 01:02:26.494946] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000028b40/0x617000008340) succeed. 00:18:19.871 [2024-11-19 01:02:26.494975] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:18:19.871 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:19.871 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:19.871 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:20.130 Malloc1 00:18:20.130 01:02:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:20.389 Malloc2 00:18:20.389 01:02:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:20.647 01:02:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:20.920 01:02:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:20.920 [2024-11-19 01:02:27.567388] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:20.920 01:02:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:20.920 01:02:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6265531a-e34c-4487-b3ed-36f1d950ceda -a 192.168.100.8 -s 4420 -i 4 00:18:21.181 01:02:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # 
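The RPC calls above provision the entire target side before the host connects: an RDMA transport, two 64 MB malloc bdevs with 512-byte blocks, the cnode1 subsystem with serial SPDKISFASTANDAWESOME, its first namespace, and a listener on 192.168.100.8:4420. Collected in one place with the same arguments (rpc.py assumed reachable from the SPDK tree):

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2    # attached as namespace 2 later in the test
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # host side, flags exactly as logged above (-q/-I carry the per-test host NQN and ID):
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 6265531a-e34c-4487-b3ed-36f1d950ceda -a 192.168.100.8 -s 4420 -i 4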
waitforserial SPDKISFASTANDAWESOME 00:18:21.181 01:02:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:21.181 01:02:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.181 01:02:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:21.181 01:02:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.082 [ 0]:0x1 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:23.082 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.342 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d95c07e9d28749938900b55217fd9bf3 00:18:23.342 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d95c07e9d28749938900b55217fd9bf3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.342 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:23.342 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:23.342 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.342 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.342 [ 0]:0x1 00:18:23.342 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.342 01:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
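waitforserial and the controller lookup above are two small host-side probes: the first counts block devices whose SERIAL matches the subsystem's, the second maps the subsystem NQN back to the controller name the connect created. Stand-alone versions of both:

  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME     # >= 1 once the namespace is up
  nvme list-subsys -o json | \
      jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'   # -> nvme0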
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:23.342 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d95c07e9d28749938900b55217fd9bf3 00:18:23.342 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d95c07e9d28749938900b55217fd9bf3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.342 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:23.342 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.342 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:23.342 [ 1]:0x2 00:18:23.342 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:23.342 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.601 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=407ee7f373c8445fb825bae293872b51 00:18:23.601 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 407ee7f373c8445fb825bae293872b51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.601 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:23.601 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.860 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:24.120 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:24.379 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:24.379 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6265531a-e34c-4487-b3ed-36f1d950ceda -a 192.168.100.8 -s 4420 -i 4 00:18:24.379 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:24.379 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:24.379 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.379 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:24.379 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:24.379 01:02:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:26.286 01:02:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:26.286 01:02:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:26.286 01:02:32 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:26.286 01:02:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:26.286 01:02:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.286 01:02:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:26.286 01:02:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:26.286 01:02:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.545 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.546 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.546 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:26.546 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:18:26.546 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:26.546 [ 0]:0x2 00:18:26.546 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:26.546 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.546 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=407ee7f373c8445fb825bae293872b51 00:18:26.546 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 407ee7f373c8445fb825bae293872b51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.546 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:26.805 [ 0]:0x1 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d95c07e9d28749938900b55217fd9bf3 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d95c07e9d28749938900b55217fd9bf3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:26.805 [ 1]:0x2 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=407ee7f373c8445fb825bae293872b51 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 407ee7f373c8445fb825bae293872b51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:26.805 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.064 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.065 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.065 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:27.065 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.065 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:27.065 [ 0]:0x2 00:18:27.065 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.065 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.065 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=407ee7f373c8445fb825bae293872b51 00:18:27.065 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 407ee7f373c8445fb825bae293872b51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.065 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:27.065 01:02:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:27.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.633 01:02:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:27.633 01:02:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:27.633 01:02:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6265531a-e34c-4487-b3ed-36f1d950ceda -a 192.168.100.8 -s 4420 -i 4 00:18:27.892 01:02:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:27.892 01:02:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:27.892 01:02:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.892 01:02:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:27.892 01:02:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:27.892 01:02:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:29.798 [ 0]:0x1 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d95c07e9d28749938900b55217fd9bf3 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d95c07e9d28749938900b55217fd9bf3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:29.798 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:29.799 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:29.799 [ 1]:0x2 
00:18:29.799 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:29.799 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=407ee7f373c8445fb825bae293872b51 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 407ee7f373c8445fb825bae293872b51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:30.058 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:30.317 [ 0]:0x2 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=407ee7f373c8445fb825bae293872b51 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 407ee7f373c8445fb825bae293872b51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:30.317 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.318 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:30.318 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.318 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:30.318 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.318 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:30.318 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:18:30.318 01:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:30.318 [2024-11-19 01:02:36.997337] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:30.318 request: 00:18:30.318 { 00:18:30.318 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.318 "nsid": 2, 00:18:30.318 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.318 "method": "nvmf_ns_remove_host", 00:18:30.318 "req_id": 1 00:18:30.318 } 00:18:30.318 Got JSON-RPC error response 00:18:30.318 response: 00:18:30.318 { 00:18:30.318 "code": -32602, 00:18:30.318 "message": "Invalid parameters" 00:18:30.318 } 00:18:30.577 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:30.577 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.577 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.577 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.577 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:30.577 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:30.578 [ 0]:0x2 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=407ee7f373c8445fb825bae293872b51 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 407ee7f373c8445fb825bae293872b51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:30.578 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:30.578 01:02:37 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:30.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.837 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=346788 00:18:30.837 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:30.837 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.837 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 346788 /var/tmp/host.sock 00:18:30.837 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 346788 ']' 00:18:30.837 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:30.837 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.837 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:30.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:30.837 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.837 01:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:31.096 [2024-11-19 01:02:37.557274] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:18:31.096 [2024-11-19 01:02:37.557369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346788 ] 00:18:31.096 [2024-11-19 01:02:37.681375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.356 [2024-11-19 01:02:37.790226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.294 01:02:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.294 01:02:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:32.294 01:02:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:32.294 01:02:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:32.553 01:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c2b15450-f7ee-4a1a-a711-612be530d92d 00:18:32.553 01:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:32.553 01:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C2B15450F7EE4A1AA711612BE530D92D -i 00:18:32.811 01:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid cfa43174-3ba5-47d3-9431-54021a605aa3 00:18:32.811 01:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:32.811 01:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g CFA431743BA547D3943154021A605AA3 -i 00:18:32.811 01:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:33.069 01:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:33.328 01:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:33.328 01:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:33.587 nvme0n1 00:18:33.587 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:33.587 01:02:40 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:33.845 nvme1n2 00:18:33.845 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:33.845 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:33.845 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:33.845 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:33.845 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:34.103 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:34.103 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:34.103 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:34.103 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:34.103 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c2b15450-f7ee-4a1a-a711-612be530d92d == \c\2\b\1\5\4\5\0\-\f\7\e\e\-\4\a\1\a\-\a\7\1\1\-\6\1\2\b\e\5\3\0\d\9\2\d ]] 00:18:34.103 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:34.103 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:34.103 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:34.362 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ cfa43174-3ba5-47d3-9431-54021a605aa3 == \c\f\a\4\3\1\7\4\-\3\b\a\5\-\4\7\d\3\-\9\4\3\1\-\5\4\0\2\1\a\6\0\5\a\a\3 ]] 00:18:34.362 01:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:34.620 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c2b15450-f7ee-4a1a-a711-612be530d92d 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C2B15450F7EE4A1AA711612BE530D92D 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 
00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C2B15450F7EE4A1AA711612BE530D92D 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C2B15450F7EE4A1AA711612BE530D92D 00:18:34.916 [2024-11-19 01:02:41.527927] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:34.916 [2024-11-19 01:02:41.527971] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:34.916 [2024-11-19 01:02:41.527984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.916 request: 00:18:34.916 { 00:18:34.916 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.916 "namespace": { 00:18:34.916 "bdev_name": "invalid", 00:18:34.916 "nsid": 1, 00:18:34.916 "nguid": "C2B15450F7EE4A1AA711612BE530D92D", 00:18:34.916 "no_auto_visible": false 00:18:34.916 }, 00:18:34.916 "method": "nvmf_subsystem_add_ns", 00:18:34.916 "req_id": 1 00:18:34.916 } 00:18:34.916 Got JSON-RPC error response 00:18:34.916 response: 00:18:34.916 { 00:18:34.916 "code": -32602, 00:18:34.916 "message": "Invalid parameters" 00:18:34.916 } 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c2b15450-f7ee-4a1a-a711-612be530d92d 00:18:34.916 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:34.916 01:02:41 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C2B15450F7EE4A1AA711612BE530D92D -i 00:18:35.175 01:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:37.710 01:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:37.710 01:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:37.710 01:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:37.710 01:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:37.710 01:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 346788 00:18:37.710 01:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 346788 ']' 00:18:37.710 01:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 346788 00:18:37.710 01:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:37.710 01:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.710 01:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346788 00:18:37.710 01:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:37.710 01:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:37.710 01:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346788' 00:18:37.710 killing process with pid 346788 00:18:37.710 01:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 346788 00:18:37.710 01:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 346788 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:40.244 rmmod nvme_rdma 00:18:40.244 
rmmod nvme_fabrics 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 344745 ']' 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 344745 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 344745 ']' 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 344745 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 344745 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 344745' 00:18:40.244 killing process with pid 344745 00:18:40.244 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 344745 00:18:40.245 01:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 344745 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:41.624 00:18:41.624 real 0m28.859s 00:18:41.624 user 0m39.164s 00:18:41.624 sys 0m6.878s 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:41.624 ************************************ 00:18:41.624 END TEST nvmf_ns_masking 00:18:41.624 ************************************ 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.624 ************************************ 00:18:41.624 START TEST nvmf_nvme_cli 00:18:41.624 ************************************ 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:18:41.624 * Looking for test storage... 
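(Condensed reference for the nvmf_ns_masking run that just completed above: the per-host visibility checks it performs reduce to roughly the following target-side RPCs and initiator-side nvme-cli calls. This is a sketch assembled only from commands visible in the log, with the long Jenkins paths shortened to scripts/rpc.py; it is not the verbatim ns_masking.sh script, and the NQNs/addresses are the ones used above.)

    # target side: RDMA transport, backing bdev, subsystem, listener (values as used in the run above)
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # a namespace added with --no-auto-visible stays hidden until a host is granted access explicitly
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # initiator side: connect as host1 and check which namespaces are exposed
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 192.168.100.8 -s 4420
    nvme list-ns /dev/nvme0
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # an all-zero NGUID is how the test detects a masked namespace
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1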
00:18:41.624 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:41.624 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.885 --rc genhtml_branch_coverage=1 00:18:41.885 --rc genhtml_function_coverage=1 00:18:41.885 --rc genhtml_legend=1 00:18:41.885 --rc geninfo_all_blocks=1 00:18:41.885 --rc geninfo_unexecuted_blocks=1 00:18:41.885 00:18:41.885 ' 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.885 --rc genhtml_branch_coverage=1 00:18:41.885 --rc genhtml_function_coverage=1 00:18:41.885 --rc genhtml_legend=1 00:18:41.885 --rc geninfo_all_blocks=1 00:18:41.885 --rc geninfo_unexecuted_blocks=1 00:18:41.885 00:18:41.885 ' 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.885 --rc genhtml_branch_coverage=1 00:18:41.885 --rc genhtml_function_coverage=1 00:18:41.885 --rc genhtml_legend=1 00:18:41.885 --rc geninfo_all_blocks=1 00:18:41.885 --rc geninfo_unexecuted_blocks=1 00:18:41.885 00:18:41.885 ' 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.885 --rc genhtml_branch_coverage=1 00:18:41.885 --rc genhtml_function_coverage=1 00:18:41.885 --rc genhtml_legend=1 00:18:41.885 --rc geninfo_all_blocks=1 00:18:41.885 --rc geninfo_unexecuted_blocks=1 00:18:41.885 00:18:41.885 ' 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:41.885 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:41.886 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:41.886 
01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:41.886 01:02:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- 
# x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.459 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:48.460 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:48.460 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:48.460 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:48.460 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:48.460 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:48.460 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:48.460 01:02:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:48.460 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:48.460 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@405 -- # modinfo irdma 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:48.460 Found net devices under 0000:af:00.0: cvl_0_0 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:48.460 Found net devices under 0000:af:00.1: cvl_0_1 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ 
yes == yes ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 
-- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:48.460 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:18:48.460 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:48.460 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:18:48.460 altname enp175s0f0np0 00:18:48.460 altname ens801f0np0 00:18:48.460 inet 192.168.100.8/24 scope global cvl_0_0 00:18:48.460 valid_lft forever preferred_lft forever 00:18:48.461 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:18:48.461 valid_lft forever preferred_lft forever 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:18:48.461 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:48.461 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:18:48.461 altname enp175s0f1np1 00:18:48.461 altname ens801f1np1 00:18:48.461 inet 192.168.100.9/24 scope global cvl_0_1 00:18:48.461 valid_lft forever preferred_lft forever 00:18:48.461 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:18:48.461 valid_lft forever preferred_lft forever 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo cvl_0_0 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo cvl_0_1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk 
'{print $4}' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:48.461 192.168.100.9' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:48.461 192.168.100.9' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:48.461 192.168.100.9' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=351621 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 351621 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 351621 ']' 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
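For reference, the environment setup and target launch traced above reduce to roughly the following shell steps. This is a minimal sketch assuming an e810/irdma host laid out like this one (cvl_0_0/cvl_0_1 carrying 192.168.100.8/9) and an SPDK checkout at $SPDK_DIR; it is not the literal code of nvmf/common.sh.

    # Load the RDMA core stack plus the irdma provider with RoCE enabled, then the NVMe/RDMA host driver.
    modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
    modprobe irdma roce_ena=1
    modprobe nvme-rdma

    # Confirm the RDMA-capable netdevs carry the expected test addresses.
    ip -o -4 addr show cvl_0_0 | awk '{print $4}' | cut -d/ -f1   # expect 192.168.100.8
    ip -o -4 addr show cvl_0_1 | awk '{print $4}' | cut -d/ -f1   # expect 192.168.100.9

    # Start the NVMe-oF target application and wait for its RPC socket to appear.
    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

The harness's waitforlisten additionally checks that the pid is still alive and gives up after a timeout instead of looping forever, which is why the log prints the "Waiting for process to start up and listen on UNIX domain socket" message while it polls.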
00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.461 01:02:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.461 [2024-11-19 01:02:54.324974] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:48.461 [2024-11-19 01:02:54.325080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.461 [2024-11-19 01:02:54.455187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:48.461 [2024-11-19 01:02:54.562978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.461 [2024-11-19 01:02:54.563025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.461 [2024-11-19 01:02:54.563036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.461 [2024-11-19 01:02:54.563046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.461 [2024-11-19 01:02:54.563055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.461 [2024-11-19 01:02:54.565377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.461 [2024-11-19 01:02:54.565476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.461 [2024-11-19 01:02:54.565542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.461 [2024-11-19 01:02:54.565563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.461 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.461 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:48.461 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.461 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.461 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.721 [2024-11-19 01:02:55.198802] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:18:48.721 [2024-11-19 01:02:55.208368] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:18:48.721 [2024-11-19 01:02:55.208396] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.721 Malloc0 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.721 Malloc1 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.721 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.981 [2024-11-19 01:02:55.418862] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:48.981 01:02:55 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:18:48.981 00:18:48.981 Discovery Log Number of Records 2, Generation counter 2 00:18:48.981 =====Discovery Log Entry 0====== 00:18:48.981 trtype: rdma 00:18:48.981 adrfam: ipv4 00:18:48.981 subtype: current discovery subsystem 00:18:48.981 treq: not required 00:18:48.981 portid: 0 00:18:48.981 trsvcid: 4420 00:18:48.981 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:48.981 traddr: 192.168.100.8 00:18:48.981 eflags: explicit discovery connections, duplicate discovery information 00:18:48.981 rdma_prtype: not specified 00:18:48.981 rdma_qptype: connected 00:18:48.981 rdma_cms: rdma-cm 00:18:48.981 rdma_pkey: 0x0000 00:18:48.981 =====Discovery Log Entry 1====== 00:18:48.981 trtype: rdma 00:18:48.981 adrfam: ipv4 00:18:48.981 subtype: nvme subsystem 00:18:48.981 treq: not required 00:18:48.981 portid: 0 00:18:48.981 trsvcid: 4420 00:18:48.981 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:48.981 traddr: 192.168.100.8 00:18:48.981 eflags: none 00:18:48.981 rdma_prtype: not specified 00:18:48.981 rdma_qptype: connected 00:18:48.981 rdma_cms: rdma-cm 00:18:48.981 rdma_pkey: 0x0000 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:18:48.981 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:49.240 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:49.240 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:49.240 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.240 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:49.240 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:49.240 01:02:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:51.147 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:51.147 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:51.147 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:51.147 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:51.147 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.147 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:51.147 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:51.147 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:51.147 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.147 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:18:51.406 01:02:57 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:51.406 /dev/nvme0n2 00:18:51.406 /dev/nvme1n1 00:18:51.406 /dev/nvme1n2 ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=4 00:18:51.406 01:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:52.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:52.344 01:02:58 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:52.344 rmmod nvme_rdma 00:18:52.344 rmmod nvme_fabrics 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 351621 ']' 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 351621 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 351621 ']' 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 351621 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351621 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.344 
01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351621' 00:18:52.344 killing process with pid 351621 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 351621 00:18:52.344 01:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 351621 00:18:54.254 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:54.254 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:54.254 00:18:54.254 real 0m12.320s 00:18:54.254 user 0m24.588s 00:18:54.254 sys 0m5.059s 00:18:54.254 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.254 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.254 ************************************ 00:18:54.254 END TEST nvmf_nvme_cli 00:18:54.254 ************************************ 00:18:54.254 01:03:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:54.254 01:03:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:18:54.254 01:03:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.254 01:03:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.254 01:03:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:54.255 ************************************ 00:18:54.255 START TEST nvmf_auth_target 00:18:54.255 ************************************ 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:18:54.255 * Looking for test storage... 
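Before the auth target trace continues below, the nvmf_nvme_cli run that just completed exercises a full create/discover/connect/teardown cycle over RDMA. Condensed, and assuming rpc_cmd in the trace resolves to scripts/rpc.py against /var/tmp/spdk.sock as in the SPDK test harness, the sequence looks roughly like this sketch of the traced commands (not the verbatim nvme_cli.sh):

    rpc="scripts/rpc.py"   # assumed wrapper for the rpc_cmd calls seen in the trace
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # Host side: discover, connect (-i 15 because the harness detected an irdma/RoCE NIC),
    # count namespaces by serial, then tear down. NVME_HOSTNQN/NVME_HOSTID come from nvmf/common.sh.
    nvme discover -t rdma -a 192.168.100.8 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # the trace expects 2 namespaces
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The waitforserial helper seen in the trace is essentially a retry loop around that lsblk | grep count, sleeping a couple of seconds between attempts until the number of matching serials equals the expected namespace count.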
00:18:54.255 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:54.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.255 --rc genhtml_branch_coverage=1 00:18:54.255 --rc genhtml_function_coverage=1 00:18:54.255 --rc genhtml_legend=1 00:18:54.255 --rc geninfo_all_blocks=1 00:18:54.255 --rc geninfo_unexecuted_blocks=1 00:18:54.255 00:18:54.255 ' 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:54.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.255 --rc genhtml_branch_coverage=1 00:18:54.255 --rc genhtml_function_coverage=1 00:18:54.255 --rc genhtml_legend=1 00:18:54.255 --rc geninfo_all_blocks=1 00:18:54.255 --rc geninfo_unexecuted_blocks=1 00:18:54.255 00:18:54.255 ' 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:54.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.255 --rc genhtml_branch_coverage=1 00:18:54.255 --rc genhtml_function_coverage=1 00:18:54.255 --rc genhtml_legend=1 00:18:54.255 --rc geninfo_all_blocks=1 00:18:54.255 --rc geninfo_unexecuted_blocks=1 00:18:54.255 00:18:54.255 ' 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:54.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.255 --rc genhtml_branch_coverage=1 00:18:54.255 --rc genhtml_function_coverage=1 00:18:54.255 --rc genhtml_legend=1 00:18:54.255 --rc geninfo_all_blocks=1 00:18:54.255 --rc geninfo_unexecuted_blocks=1 00:18:54.255 00:18:54.255 ' 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.255 01:03:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.255 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:54.256 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:54.256 01:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:00.830 01:03:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:00.830 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.830 01:03:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:00.830 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:00.830 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@405 -- # modinfo irdma 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:00.831 Found net devices under 0000:af:00.0: cvl_0_0 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.831 01:03:06 
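Condensed, the device walk traced above maps each supported PCI function to its kernel netdev through sysfs and, because this run uses Intel E810 ports (device 0x159b) over RDMA, loads the irdma driver with RoCE enabled; the second port follows the same path just below. A rough recap using the addresses, sysfs paths, and module option that appear in the trace:

    # PCI addresses from this run; the net/ directory under each device names its interface.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for netdir in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${netdir##*/}"    # cvl_0_0 / cvl_0_1 here
        done
    done

    # E810 over RDMA: the trace loads irdma with RoCEv2 enabled before going further.
    modprobe irdma roce_ena=1
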
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:00.831 Found net devices under 0000:af:00.1: cvl_0_1 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:00.831 
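load_ib_rdma_modules, traced above, reduces to probing the InfiniBand/RDMA core stack before any addresses are assigned; the module list below is exactly the one loaded in the trace:

    # Kernel modules probed by the traced load_ib_rdma_modules step (Linux only).
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
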
01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:19:00.831 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:19:00.831 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:19:00.831 altname enp175s0f0np0 00:19:00.831 altname ens801f0np0 00:19:00.831 inet 192.168.100.8/24 scope global cvl_0_0 00:19:00.831 valid_lft forever preferred_lft forever 00:19:00.831 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:19:00.831 valid_lft forever preferred_lft forever 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:00.831 01:03:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:19:00.831 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:19:00.831 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:19:00.831 altname enp175s0f1np1 00:19:00.831 altname ens801f1np1 00:19:00.831 inet 192.168.100.9/24 scope global cvl_0_1 00:19:00.831 valid_lft forever preferred_lft forever 00:19:00.831 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:19:00.831 valid_lft forever preferred_lft forever 00:19:00.831 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 
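The get_ip_address calls above are plain iproute2 plus text filtering: field 4 of `ip -o -4 addr show <interface>` is the CIDR address, and cut strips the prefix length. The same one-liner, with the interfaces and addresses seen in this run:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address cvl_0_0    # 192.168.100.8 in this run
    get_ip_address cvl_0_1    # 192.168.100.9 in this run
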
00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:00.832 192.168.100.9' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:00.832 192.168.100.9' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:00.832 192.168.100.9' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@509 -- # nvmfpid=355977 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 355977 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 355977 ']' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.832 01:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.832 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.832 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:00.832 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:00.832 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:00.832 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=356217 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=945e343d741019708f873f83c00da9f88c737e735580b57c 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PGy 00:19:01.092 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 945e343d741019708f873f83c00da9f88c737e735580b57c 0 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 945e343d741019708f873f83c00da9f88c737e735580b57c 0 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=945e343d741019708f873f83c00da9f88c737e735580b57c 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PGy 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PGy 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.PGy 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f0b03d6f3ec3c05a7f8e91958b071017393ded79db7ebf34052d4793bae3a9db 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.t87 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f0b03d6f3ec3c05a7f8e91958b071017393ded79db7ebf34052d4793bae3a9db 3 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f0b03d6f3ec3c05a7f8e91958b071017393ded79db7ebf34052d4793bae3a9db 3 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f0b03d6f3ec3c05a7f8e91958b071017393ded79db7ebf34052d4793bae3a9db 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.093 
01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.t87 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.t87 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.t87 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b48271db305bf31a946cbf7735e6d2f 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BZB 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6b48271db305bf31a946cbf7735e6d2f 1 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6b48271db305bf31a946cbf7735e6d2f 1 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b48271db305bf31a946cbf7735e6d2f 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BZB 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BZB 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.BZB 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 
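Interleaved with the key generation, the trace has already brought up the two SPDK applications this auth test drives: the NVMe-oF target (nvmfappstart, RPC on the default /var/tmp/spdk.sock) and a second spdk_tgt acting as the host side on /var/tmp/host.sock. Their command lines as launched above, with backgrounding added only as a sketch (the harness uses its own wrappers; PIDs are specific to this run):

    BIN=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin

    # Target side: nvmfpid=355977 in this run.
    $BIN/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &

    # Host side: hostpid=356217, RPC socket /var/tmp/host.sock.
    $BIN/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
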
00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9dce7c81443c6ecaf3be3d358a7497b76bf6972cf75505a9 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wKU 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9dce7c81443c6ecaf3be3d358a7497b76bf6972cf75505a9 2 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9dce7c81443c6ecaf3be3d358a7497b76bf6972cf75505a9 2 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9dce7c81443c6ecaf3be3d358a7497b76bf6972cf75505a9 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wKU 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wKU 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.wKU 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:01.093 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d8ba029e63e2ee96877801196c426b95d96950cae3ac184b 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Rdj 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d8ba029e63e2ee96877801196c426b95d96950cae3ac184b 2 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d8ba029e63e2ee96877801196c426b95d96950cae3ac184b 2 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d8ba029e63e2ee96877801196c426b95d96950cae3ac184b 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Rdj 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Rdj 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Rdj 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5da91a9006a5f5b8a0ab694a735de57b 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tNA 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5da91a9006a5f5b8a0ab694a735de57b 1 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5da91a9006a5f5b8a0ab694a735de57b 1 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.353 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5da91a9006a5f5b8a0ab694a735de57b 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tNA 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tNA 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.tNA 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5f83c171422a7703f2a86e8f82c130bee57ab3a2790ade24cf09654a5aa772d7 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Yli 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5f83c171422a7703f2a86e8f82c130bee57ab3a2790ade24cf09654a5aa772d7 3 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5f83c171422a7703f2a86e8f82c130bee57ab3a2790ade24cf09654a5aa772d7 3 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5f83c171422a7703f2a86e8f82c130bee57ab3a2790ade24cf09654a5aa772d7 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Yli 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Yli 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Yli 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 355977 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 355977 ']' 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
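Every keys[i]/ckeys[i] pair above comes from the same gen_dhchap_key recipe: read len/2 random bytes from /dev/urandom as a hex string with xxd, wrap that secret into the DHHC-1 format (the trace does this with the small inline python helper behind format_dhchap_key, not reproduced here), and leave it in a mode-0600 temp file whose path is what the test actually passes around. A hedged sketch of the generation and file handling only:

    # Hedged sketch; the DHHC-1 wrapping itself is done by SPDK's format_dhchap_key
    # helper in nvmf/common.sh and is deliberately not re-implemented here.
    len=48                                       # 48 or 64 hex chars in the trace
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)          # e.g. /tmp/spdk.key-null.PGy in this run
    # ... format "$key" as a DHHC-1 string and write it into "$file" ...
    chmod 0600 "$file"
    echo "$file"                                 # this path becomes keys[0] in target/auth.sh
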
00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.354 01:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.613 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.613 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:01.613 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 356217 /var/tmp/host.sock 00:19:01.613 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 356217 ']' 00:19:01.613 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:01.613 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.613 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:01.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:01.613 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.613 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PGy 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.PGy 00:19:02.182 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.PGy 00:19:02.441 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.t87 ]] 00:19:02.441 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t87 00:19:02.441 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.441 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.441 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.441 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t87 00:19:02.441 01:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t87 00:19:02.441 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:02.441 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BZB 00:19:02.441 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.441 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.707 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.707 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BZB 00:19:02.707 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BZB 00:19:02.707 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.wKU ]] 00:19:02.707 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wKU 00:19:02.707 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.707 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.707 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.707 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wKU 00:19:02.707 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wKU 00:19:02.966 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:02.966 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Rdj 00:19:02.966 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.966 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.966 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.966 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Rdj 00:19:02.966 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Rdj 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.tNA ]] 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tNA 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tNA 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tNA 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Yli 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Yli 00:19:03.225 01:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Yli 00:19:03.485 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:03.485 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:03.485 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.485 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.485 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:03.485 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:03.744 01:03:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.744 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.003 00:19:04.003 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.003 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.003 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.262 { 00:19:04.262 "cntlid": 1, 00:19:04.262 "qid": 0, 00:19:04.262 "state": "enabled", 00:19:04.262 "thread": "nvmf_tgt_poll_group_000", 00:19:04.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:04.262 "listen_address": { 00:19:04.262 "trtype": "RDMA", 00:19:04.262 "adrfam": "IPv4", 00:19:04.262 "traddr": "192.168.100.8", 00:19:04.262 "trsvcid": "4420" 00:19:04.262 }, 00:19:04.262 "peer_address": { 00:19:04.262 "trtype": "RDMA", 00:19:04.262 "adrfam": "IPv4", 00:19:04.262 "traddr": "192.168.100.8", 00:19:04.262 "trsvcid": "51102" 00:19:04.262 }, 00:19:04.262 "auth": { 00:19:04.262 "state": 
"completed", 00:19:04.262 "digest": "sha256", 00:19:04.262 "dhgroup": "null" 00:19:04.262 } 00:19:04.262 } 00:19:04.262 ]' 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.262 01:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.521 01:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:04.521 01:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:08.715 01:03:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.715 01:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.715 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.715 { 00:19:08.715 "cntlid": 3, 00:19:08.715 "qid": 0, 00:19:08.715 "state": "enabled", 00:19:08.715 "thread": "nvmf_tgt_poll_group_000", 00:19:08.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:08.715 "listen_address": { 00:19:08.715 "trtype": "RDMA", 
00:19:08.715 "adrfam": "IPv4", 00:19:08.715 "traddr": "192.168.100.8", 00:19:08.715 "trsvcid": "4420" 00:19:08.715 }, 00:19:08.715 "peer_address": { 00:19:08.715 "trtype": "RDMA", 00:19:08.715 "adrfam": "IPv4", 00:19:08.715 "traddr": "192.168.100.8", 00:19:08.715 "trsvcid": "42091" 00:19:08.715 }, 00:19:08.715 "auth": { 00:19:08.715 "state": "completed", 00:19:08.715 "digest": "sha256", 00:19:08.715 "dhgroup": "null" 00:19:08.715 } 00:19:08.715 } 00:19:08.715 ]' 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.715 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.974 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:08.974 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.974 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.974 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.974 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.233 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:09.233 01:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:09.799 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.799 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:09.799 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.799 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.799 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.799 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.799 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.799 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.059 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.318 00:19:10.318 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.318 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.318 01:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.577 { 00:19:10.577 "cntlid": 5, 00:19:10.577 "qid": 0, 00:19:10.577 "state": 
"enabled", 00:19:10.577 "thread": "nvmf_tgt_poll_group_000", 00:19:10.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:10.577 "listen_address": { 00:19:10.577 "trtype": "RDMA", 00:19:10.577 "adrfam": "IPv4", 00:19:10.577 "traddr": "192.168.100.8", 00:19:10.577 "trsvcid": "4420" 00:19:10.577 }, 00:19:10.577 "peer_address": { 00:19:10.577 "trtype": "RDMA", 00:19:10.577 "adrfam": "IPv4", 00:19:10.577 "traddr": "192.168.100.8", 00:19:10.577 "trsvcid": "40827" 00:19:10.577 }, 00:19:10.577 "auth": { 00:19:10.577 "state": "completed", 00:19:10.577 "digest": "sha256", 00:19:10.577 "dhgroup": "null" 00:19:10.577 } 00:19:10.577 } 00:19:10.577 ]' 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.577 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.837 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:10.837 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:11.403 01:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.662 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.921 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.921 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:11.921 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.922 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.922 00:19:11.922 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.922 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.922 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.180 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.180 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.180 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.180 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.180 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.180 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:19:12.180 { 00:19:12.180 "cntlid": 7, 00:19:12.180 "qid": 0, 00:19:12.180 "state": "enabled", 00:19:12.180 "thread": "nvmf_tgt_poll_group_000", 00:19:12.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:12.180 "listen_address": { 00:19:12.180 "trtype": "RDMA", 00:19:12.180 "adrfam": "IPv4", 00:19:12.180 "traddr": "192.168.100.8", 00:19:12.180 "trsvcid": "4420" 00:19:12.180 }, 00:19:12.180 "peer_address": { 00:19:12.180 "trtype": "RDMA", 00:19:12.180 "adrfam": "IPv4", 00:19:12.180 "traddr": "192.168.100.8", 00:19:12.180 "trsvcid": "35425" 00:19:12.180 }, 00:19:12.180 "auth": { 00:19:12.180 "state": "completed", 00:19:12.180 "digest": "sha256", 00:19:12.180 "dhgroup": "null" 00:19:12.180 } 00:19:12.180 } 00:19:12.180 ]' 00:19:12.180 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.439 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.439 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.439 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:12.439 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.439 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.439 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.439 01:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.697 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:12.697 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:13.264 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.264 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:13.264 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.264 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.264 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.264 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.264 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:13.264 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.264 01:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.523 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.782 00:19:13.782 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.782 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.782 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.041 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.041 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.041 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.041 01:03:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.041 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.041 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.041 { 00:19:14.041 "cntlid": 9, 00:19:14.041 "qid": 0, 00:19:14.041 "state": "enabled", 00:19:14.041 "thread": "nvmf_tgt_poll_group_000", 00:19:14.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:14.041 "listen_address": { 00:19:14.041 "trtype": "RDMA", 00:19:14.041 "adrfam": "IPv4", 00:19:14.041 "traddr": "192.168.100.8", 00:19:14.041 "trsvcid": "4420" 00:19:14.041 }, 00:19:14.041 "peer_address": { 00:19:14.041 "trtype": "RDMA", 00:19:14.041 "adrfam": "IPv4", 00:19:14.041 "traddr": "192.168.100.8", 00:19:14.041 "trsvcid": "34720" 00:19:14.041 }, 00:19:14.041 "auth": { 00:19:14.041 "state": "completed", 00:19:14.041 "digest": "sha256", 00:19:14.041 "dhgroup": "ffdhe2048" 00:19:14.041 } 00:19:14.041 } 00:19:14.041 ]' 00:19:14.041 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.041 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.041 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.041 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.041 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.300 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.300 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.300 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.300 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:14.300 01:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:14.868 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.127 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:15.127 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.127 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.127 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.127 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.127 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.127 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.386 01:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.645 00:19:15.645 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.645 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.645 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.905 01:03:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.905 { 00:19:15.905 "cntlid": 11, 00:19:15.905 "qid": 0, 00:19:15.905 "state": "enabled", 00:19:15.905 "thread": "nvmf_tgt_poll_group_000", 00:19:15.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:15.905 "listen_address": { 00:19:15.905 "trtype": "RDMA", 00:19:15.905 "adrfam": "IPv4", 00:19:15.905 "traddr": "192.168.100.8", 00:19:15.905 "trsvcid": "4420" 00:19:15.905 }, 00:19:15.905 "peer_address": { 00:19:15.905 "trtype": "RDMA", 00:19:15.905 "adrfam": "IPv4", 00:19:15.905 "traddr": "192.168.100.8", 00:19:15.905 "trsvcid": "50179" 00:19:15.905 }, 00:19:15.905 "auth": { 00:19:15.905 "state": "completed", 00:19:15.905 "digest": "sha256", 00:19:15.905 "dhgroup": "ffdhe2048" 00:19:15.905 } 00:19:15.905 } 00:19:15.905 ]' 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.905 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.164 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:16.164 01:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:16.732 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.992 
01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.992 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.251 00:19:17.251 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.251 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.251 01:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.511 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.511 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.511 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.511 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.511 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.511 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.511 { 00:19:17.511 "cntlid": 13, 00:19:17.511 "qid": 0, 00:19:17.511 "state": "enabled", 00:19:17.511 "thread": "nvmf_tgt_poll_group_000", 00:19:17.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:17.511 "listen_address": { 00:19:17.511 "trtype": "RDMA", 00:19:17.511 "adrfam": "IPv4", 00:19:17.511 "traddr": "192.168.100.8", 00:19:17.511 "trsvcid": "4420" 00:19:17.511 }, 00:19:17.511 "peer_address": { 00:19:17.511 "trtype": "RDMA", 00:19:17.511 "adrfam": "IPv4", 00:19:17.511 "traddr": "192.168.100.8", 00:19:17.511 "trsvcid": "36747" 00:19:17.511 }, 00:19:17.511 "auth": { 00:19:17.511 "state": "completed", 00:19:17.511 "digest": "sha256", 00:19:17.511 "dhgroup": "ffdhe2048" 00:19:17.511 } 00:19:17.511 } 00:19:17.511 ]' 00:19:17.511 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.770 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.770 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.770 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.770 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.770 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.770 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.770 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.029 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:18.029 01:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret 
DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:18.598 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.598 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:18.598 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.598 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.598 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.598 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.598 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.598 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.856 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:19.115 00:19:19.115 01:03:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.115 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.115 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.374 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.374 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.374 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.374 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.374 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.374 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.374 { 00:19:19.374 "cntlid": 15, 00:19:19.374 "qid": 0, 00:19:19.374 "state": "enabled", 00:19:19.374 "thread": "nvmf_tgt_poll_group_000", 00:19:19.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:19.374 "listen_address": { 00:19:19.374 "trtype": "RDMA", 00:19:19.374 "adrfam": "IPv4", 00:19:19.374 "traddr": "192.168.100.8", 00:19:19.374 "trsvcid": "4420" 00:19:19.374 }, 00:19:19.374 "peer_address": { 00:19:19.374 "trtype": "RDMA", 00:19:19.374 "adrfam": "IPv4", 00:19:19.374 "traddr": "192.168.100.8", 00:19:19.374 "trsvcid": "37297" 00:19:19.374 }, 00:19:19.374 "auth": { 00:19:19.374 "state": "completed", 00:19:19.374 "digest": "sha256", 00:19:19.374 "dhgroup": "ffdhe2048" 00:19:19.374 } 00:19:19.374 } 00:19:19.374 ]' 00:19:19.374 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.374 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.374 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.374 01:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.374 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.374 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.374 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.374 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.633 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:19.633 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:20.201 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.460 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:20.460 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.460 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.460 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.460 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.460 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.460 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.460 01:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.460 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:20.460 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.460 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.460 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.460 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:20.460 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.460 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.460 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.460 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.719 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.719 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.719 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.719 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.719 00:19:20.987 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.987 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.987 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.987 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.987 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.987 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.987 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.987 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.987 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.987 { 00:19:20.987 "cntlid": 17, 00:19:20.987 "qid": 0, 00:19:20.987 "state": "enabled", 00:19:20.987 "thread": "nvmf_tgt_poll_group_000", 00:19:20.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:20.988 "listen_address": { 00:19:20.988 "trtype": "RDMA", 00:19:20.988 "adrfam": "IPv4", 00:19:20.988 "traddr": "192.168.100.8", 00:19:20.988 "trsvcid": "4420" 00:19:20.988 }, 00:19:20.988 "peer_address": { 00:19:20.988 "trtype": "RDMA", 00:19:20.988 "adrfam": "IPv4", 00:19:20.988 "traddr": "192.168.100.8", 00:19:20.988 "trsvcid": "60574" 00:19:20.988 }, 00:19:20.988 "auth": { 00:19:20.988 "state": "completed", 00:19:20.988 "digest": "sha256", 00:19:20.988 "dhgroup": "ffdhe3072" 00:19:20.988 } 00:19:20.988 } 00:19:20.988 ]' 00:19:20.988 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.988 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.988 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.248 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.248 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.248 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.248 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.248 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.507 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret 
DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:21.508 01:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:22.076 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.076 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:22.076 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.076 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.076 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.076 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.076 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.076 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.335 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:22.335 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.336 01:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.594 00:19:22.594 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.594 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.594 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.853 { 00:19:22.853 "cntlid": 19, 00:19:22.853 "qid": 0, 00:19:22.853 "state": "enabled", 00:19:22.853 "thread": "nvmf_tgt_poll_group_000", 00:19:22.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:22.853 "listen_address": { 00:19:22.853 "trtype": "RDMA", 00:19:22.853 "adrfam": "IPv4", 00:19:22.853 "traddr": "192.168.100.8", 00:19:22.853 "trsvcid": "4420" 00:19:22.853 }, 00:19:22.853 "peer_address": { 00:19:22.853 "trtype": "RDMA", 00:19:22.853 "adrfam": "IPv4", 00:19:22.853 "traddr": "192.168.100.8", 00:19:22.853 "trsvcid": "38476" 00:19:22.853 }, 00:19:22.853 "auth": { 00:19:22.853 "state": "completed", 00:19:22.853 "digest": "sha256", 00:19:22.853 "dhgroup": "ffdhe3072" 00:19:22.853 } 00:19:22.853 } 00:19:22.853 ]' 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.853 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.112 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:23.112 01:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:23.679 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.939 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:23.939 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.939 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.939 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.939 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.939 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.939 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
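The trace above and below repeats the same per-key cycle from target/auth.sh for each digest/dhgroup/key combination. A minimal sketch of one iteration, reconstructed only from the commands visible in this log (rpc_cmd drives the target application, hostrpc wraps rpc.py against /var/tmp/host.sock, and key2/ckey2 are key names registered earlier in the run), not the verbatim script source:

    # One connect_authenticate iteration as exercised in this log (sketch, not the exact target/auth.sh code).
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

    # Pin the host-side initiator to a single digest/DH group so only this combination can be negotiated.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Allow the host on the target with the key under test (the controller key is optional;
    # key3 in this run is used without a ckey).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Authenticate through the SPDK host application, then tear the controller down again.
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    hostrpc bdev_nvme_detach_controller nvme0

    # Repeat the handshake with the kernel initiator using the raw DHHC-1 secrets
    # (placeholders below), then remove the host entry before the next combination.
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret "DHHC-1:02:..." --dhchap-ctrl-secret "DHHC-1:01:..."
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"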
00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.198 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.458 00:19:24.458 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.458 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.458 01:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.458 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.458 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.458 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.458 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.718 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.718 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.718 { 00:19:24.718 "cntlid": 21, 00:19:24.718 "qid": 0, 00:19:24.718 "state": "enabled", 00:19:24.718 "thread": "nvmf_tgt_poll_group_000", 00:19:24.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:24.718 "listen_address": { 00:19:24.718 "trtype": "RDMA", 00:19:24.718 "adrfam": "IPv4", 00:19:24.718 "traddr": "192.168.100.8", 00:19:24.718 "trsvcid": "4420" 00:19:24.718 }, 00:19:24.718 "peer_address": { 00:19:24.718 "trtype": "RDMA", 00:19:24.718 "adrfam": "IPv4", 00:19:24.718 "traddr": "192.168.100.8", 00:19:24.718 "trsvcid": "60061" 00:19:24.718 }, 00:19:24.718 "auth": { 00:19:24.718 "state": "completed", 00:19:24.718 "digest": "sha256", 00:19:24.718 "dhgroup": "ffdhe3072" 00:19:24.718 } 00:19:24.718 } 00:19:24.718 ]' 00:19:24.718 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.718 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.718 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.718 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.718 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.718 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.718 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.718 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.977 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:24.977 01:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:25.546 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.546 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:25.546 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.546 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.546 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.546 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.546 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:25.546 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.805 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.064 00:19:26.064 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.064 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.065 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.323 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.323 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.323 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.323 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.323 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.323 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.323 { 00:19:26.323 "cntlid": 23, 00:19:26.323 "qid": 0, 00:19:26.323 "state": "enabled", 00:19:26.323 "thread": "nvmf_tgt_poll_group_000", 00:19:26.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:26.323 "listen_address": { 00:19:26.323 "trtype": "RDMA", 00:19:26.323 "adrfam": "IPv4", 00:19:26.323 "traddr": "192.168.100.8", 00:19:26.323 "trsvcid": "4420" 00:19:26.323 }, 00:19:26.323 "peer_address": { 00:19:26.323 "trtype": "RDMA", 00:19:26.323 "adrfam": "IPv4", 00:19:26.323 "traddr": "192.168.100.8", 00:19:26.323 "trsvcid": "39505" 00:19:26.323 }, 00:19:26.323 "auth": { 00:19:26.323 "state": "completed", 00:19:26.323 "digest": "sha256", 00:19:26.323 "dhgroup": "ffdhe3072" 00:19:26.323 } 00:19:26.323 } 00:19:26.323 ]' 00:19:26.323 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.323 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.323 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.323 01:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.324 01:03:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.583 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.583 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.583 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.583 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:26.583 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:27.151 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.410 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:27.410 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.410 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.410 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.410 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.410 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.410 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.410 01:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.669 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.928 00:19:27.928 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.928 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.928 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.187 { 00:19:28.187 "cntlid": 25, 00:19:28.187 "qid": 0, 00:19:28.187 "state": "enabled", 00:19:28.187 "thread": "nvmf_tgt_poll_group_000", 00:19:28.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:28.187 "listen_address": { 00:19:28.187 "trtype": "RDMA", 00:19:28.187 "adrfam": "IPv4", 00:19:28.187 "traddr": "192.168.100.8", 00:19:28.187 "trsvcid": "4420" 00:19:28.187 }, 00:19:28.187 "peer_address": { 00:19:28.187 "trtype": "RDMA", 00:19:28.187 "adrfam": "IPv4", 00:19:28.187 "traddr": "192.168.100.8", 00:19:28.187 "trsvcid": "35136" 00:19:28.187 }, 00:19:28.187 "auth": { 00:19:28.187 "state": "completed", 00:19:28.187 "digest": "sha256", 00:19:28.187 "dhgroup": "ffdhe4096" 00:19:28.187 } 00:19:28.187 } 00:19:28.187 ]' 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 
00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.187 01:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.446 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:28.446 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:29.014 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:29.274 01:03:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.274 01:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.842 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.842 { 00:19:29.842 "cntlid": 27, 00:19:29.842 "qid": 0, 00:19:29.842 "state": "enabled", 00:19:29.842 "thread": "nvmf_tgt_poll_group_000", 00:19:29.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:29.842 "listen_address": { 00:19:29.842 "trtype": "RDMA", 00:19:29.842 "adrfam": "IPv4", 00:19:29.842 "traddr": "192.168.100.8", 00:19:29.842 "trsvcid": "4420" 00:19:29.842 }, 00:19:29.842 "peer_address": { 00:19:29.842 "trtype": "RDMA", 00:19:29.842 "adrfam": "IPv4", 00:19:29.842 "traddr": "192.168.100.8", 00:19:29.842 "trsvcid": "42029" 00:19:29.842 }, 00:19:29.842 "auth": { 00:19:29.842 "state": 
"completed", 00:19:29.842 "digest": "sha256", 00:19:29.842 "dhgroup": "ffdhe4096" 00:19:29.842 } 00:19:29.842 } 00:19:29.842 ]' 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.842 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.102 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.102 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.102 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.362 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:30.362 01:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:30.930 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.930 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:30.930 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.930 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.930 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.930 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.930 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.930 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.189 01:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.449 00:19:31.449 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.449 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.449 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.709 { 00:19:31.709 "cntlid": 29, 00:19:31.709 "qid": 0, 00:19:31.709 "state": "enabled", 00:19:31.709 "thread": "nvmf_tgt_poll_group_000", 00:19:31.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:31.709 "listen_address": { 00:19:31.709 "trtype": "RDMA", 00:19:31.709 "adrfam": "IPv4", 00:19:31.709 "traddr": "192.168.100.8", 
00:19:31.709 "trsvcid": "4420" 00:19:31.709 }, 00:19:31.709 "peer_address": { 00:19:31.709 "trtype": "RDMA", 00:19:31.709 "adrfam": "IPv4", 00:19:31.709 "traddr": "192.168.100.8", 00:19:31.709 "trsvcid": "49553" 00:19:31.709 }, 00:19:31.709 "auth": { 00:19:31.709 "state": "completed", 00:19:31.709 "digest": "sha256", 00:19:31.709 "dhgroup": "ffdhe4096" 00:19:31.709 } 00:19:31.709 } 00:19:31.709 ]' 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.709 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.968 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:31.968 01:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:32.535 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.793 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:32.793 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.793 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.793 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.793 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.793 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.793 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.052 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.311 00:19:33.311 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.311 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.311 01:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.311 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.570 { 00:19:33.570 "cntlid": 31, 00:19:33.570 "qid": 0, 00:19:33.570 "state": "enabled", 00:19:33.570 "thread": "nvmf_tgt_poll_group_000", 00:19:33.570 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:33.570 "listen_address": { 00:19:33.570 "trtype": "RDMA", 00:19:33.570 "adrfam": "IPv4", 00:19:33.570 "traddr": "192.168.100.8", 00:19:33.570 "trsvcid": "4420" 00:19:33.570 }, 00:19:33.570 "peer_address": { 00:19:33.570 "trtype": "RDMA", 00:19:33.570 "adrfam": "IPv4", 00:19:33.570 "traddr": "192.168.100.8", 00:19:33.570 "trsvcid": "37648" 00:19:33.570 }, 00:19:33.570 "auth": { 00:19:33.570 "state": "completed", 00:19:33.570 "digest": "sha256", 00:19:33.570 "dhgroup": "ffdhe4096" 00:19:33.570 } 00:19:33.570 } 00:19:33.570 ]' 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.570 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.829 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:33.829 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:34.396 01:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.396 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:34.396 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.397 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.656 
01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.656 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.224 00:19:35.224 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.224 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.224 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.224 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.224 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.224 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.224 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.224 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
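The records above make up one pass of the script's connect_authenticate helper for sha256 with ffdhe6144 and key0: the host-side bdev layer is restricted to that digest/DH-group pair, the host NQN is registered on the subsystem together with its key and controller key, and an authenticated RDMA controller is attached before the qpair state is inspected. A condensed sketch of the same sequence as standalone rpc.py calls, assuming the DH-HMAC-CHAP keys key0/ckey0 were already loaded into the keyring earlier in the run (scripts/rpc.py is the SPDK rpc.py shown with its full path in the trace):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
  # Host-side SPDK app listens on /var/tmp/host.sock; the target uses the default RPC socket.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # Register the host NQN on the subsystem with its DH-HMAC-CHAP key pair.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attach an authenticated controller from the host side over RDMA.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0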
00:19:35.224 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.224 { 00:19:35.224 "cntlid": 33, 00:19:35.224 "qid": 0, 00:19:35.224 "state": "enabled", 00:19:35.224 "thread": "nvmf_tgt_poll_group_000", 00:19:35.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:35.224 "listen_address": { 00:19:35.225 "trtype": "RDMA", 00:19:35.225 "adrfam": "IPv4", 00:19:35.225 "traddr": "192.168.100.8", 00:19:35.225 "trsvcid": "4420" 00:19:35.225 }, 00:19:35.225 "peer_address": { 00:19:35.225 "trtype": "RDMA", 00:19:35.225 "adrfam": "IPv4", 00:19:35.225 "traddr": "192.168.100.8", 00:19:35.225 "trsvcid": "57023" 00:19:35.225 }, 00:19:35.225 "auth": { 00:19:35.225 "state": "completed", 00:19:35.225 "digest": "sha256", 00:19:35.225 "dhgroup": "ffdhe6144" 00:19:35.225 } 00:19:35.225 } 00:19:35.225 ]' 00:19:35.225 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.225 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.225 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.484 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.484 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.484 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.484 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.484 01:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.744 01:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:35.744 01:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:36.312 01:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.312 01:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:36.312 01:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.312 01:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.312 01:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.312 01:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.312 01:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.312 01:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.571 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.140 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
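After each attach the trace lists the host-side controllers and queries the target for the subsystem's qpairs, then checks the negotiated authentication fields with jq, expecting the configured digest and DH group and an auth state of "completed". A minimal stand-alone version of that verification, under the same assumptions as the sketch above:

  # The attached controller must show up under the expected name on the host side.
  [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # Fetch the subsystem's qpairs from the target and verify the negotiated auth parameters.
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]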
00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.140 { 00:19:37.140 "cntlid": 35, 00:19:37.140 "qid": 0, 00:19:37.140 "state": "enabled", 00:19:37.140 "thread": "nvmf_tgt_poll_group_000", 00:19:37.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:37.140 "listen_address": { 00:19:37.140 "trtype": "RDMA", 00:19:37.140 "adrfam": "IPv4", 00:19:37.140 "traddr": "192.168.100.8", 00:19:37.140 "trsvcid": "4420" 00:19:37.140 }, 00:19:37.140 "peer_address": { 00:19:37.140 "trtype": "RDMA", 00:19:37.140 "adrfam": "IPv4", 00:19:37.140 "traddr": "192.168.100.8", 00:19:37.140 "trsvcid": "42112" 00:19:37.140 }, 00:19:37.140 "auth": { 00:19:37.140 "state": "completed", 00:19:37.140 "digest": "sha256", 00:19:37.140 "dhgroup": "ffdhe6144" 00:19:37.140 } 00:19:37.140 } 00:19:37.140 ]' 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.140 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.399 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.399 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.399 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.399 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.399 01:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.657 01:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:37.657 01:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:38.224 01:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.224 01:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:38.224 01:03:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.224 01:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.224 01:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.224 01:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.224 01:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:38.224 01:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.483 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.742 00:19:38.742 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.742 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.742 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:39.000 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.000 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.000 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.000 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.000 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.000 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.000 { 00:19:39.000 "cntlid": 37, 00:19:39.000 "qid": 0, 00:19:39.000 "state": "enabled", 00:19:39.000 "thread": "nvmf_tgt_poll_group_000", 00:19:39.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:39.000 "listen_address": { 00:19:39.000 "trtype": "RDMA", 00:19:39.000 "adrfam": "IPv4", 00:19:39.000 "traddr": "192.168.100.8", 00:19:39.000 "trsvcid": "4420" 00:19:39.000 }, 00:19:39.000 "peer_address": { 00:19:39.000 "trtype": "RDMA", 00:19:39.000 "adrfam": "IPv4", 00:19:39.000 "traddr": "192.168.100.8", 00:19:39.000 "trsvcid": "60518" 00:19:39.000 }, 00:19:39.000 "auth": { 00:19:39.000 "state": "completed", 00:19:39.000 "digest": "sha256", 00:19:39.000 "dhgroup": "ffdhe6144" 00:19:39.000 } 00:19:39.000 } 00:19:39.000 ]' 00:19:39.000 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.000 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.000 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.259 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.259 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.259 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.259 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.259 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.518 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:39.518 01:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:40.085 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.085 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.085 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:40.086 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.086 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.086 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.086 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.086 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:40.086 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:40.344 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.345 01:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.603 00:19:40.603 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.603 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:40.603 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.862 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.862 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.862 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.862 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.862 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.862 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.862 { 00:19:40.862 "cntlid": 39, 00:19:40.863 "qid": 0, 00:19:40.863 "state": "enabled", 00:19:40.863 "thread": "nvmf_tgt_poll_group_000", 00:19:40.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:40.863 "listen_address": { 00:19:40.863 "trtype": "RDMA", 00:19:40.863 "adrfam": "IPv4", 00:19:40.863 "traddr": "192.168.100.8", 00:19:40.863 "trsvcid": "4420" 00:19:40.863 }, 00:19:40.863 "peer_address": { 00:19:40.863 "trtype": "RDMA", 00:19:40.863 "adrfam": "IPv4", 00:19:40.863 "traddr": "192.168.100.8", 00:19:40.863 "trsvcid": "56790" 00:19:40.863 }, 00:19:40.863 "auth": { 00:19:40.863 "state": "completed", 00:19:40.863 "digest": "sha256", 00:19:40.863 "dhgroup": "ffdhe6144" 00:19:40.863 } 00:19:40.863 } 00:19:40.863 ]' 00:19:40.863 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.863 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.863 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.121 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.121 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.122 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.122 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.122 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.381 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:41.381 01:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:41.949 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect 
-n nqn.2024-03.io.spdk:cnode0 00:19:41.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.949 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:41.949 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.949 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.949 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.949 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.949 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.949 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.949 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.208 01:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:42.775 00:19:42.775 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.775 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.775 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.034 { 00:19:43.034 "cntlid": 41, 00:19:43.034 "qid": 0, 00:19:43.034 "state": "enabled", 00:19:43.034 "thread": "nvmf_tgt_poll_group_000", 00:19:43.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:43.034 "listen_address": { 00:19:43.034 "trtype": "RDMA", 00:19:43.034 "adrfam": "IPv4", 00:19:43.034 "traddr": "192.168.100.8", 00:19:43.034 "trsvcid": "4420" 00:19:43.034 }, 00:19:43.034 "peer_address": { 00:19:43.034 "trtype": "RDMA", 00:19:43.034 "adrfam": "IPv4", 00:19:43.034 "traddr": "192.168.100.8", 00:19:43.034 "trsvcid": "55222" 00:19:43.034 }, 00:19:43.034 "auth": { 00:19:43.034 "state": "completed", 00:19:43.034 "digest": "sha256", 00:19:43.034 "dhgroup": "ffdhe8192" 00:19:43.034 } 00:19:43.034 } 00:19:43.034 ]' 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.034 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.293 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:43.293 01:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:43.859 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.118 01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.118 
01:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.686 00:19:44.686 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.686 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.686 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.945 { 00:19:44.945 "cntlid": 43, 00:19:44.945 "qid": 0, 00:19:44.945 "state": "enabled", 00:19:44.945 "thread": "nvmf_tgt_poll_group_000", 00:19:44.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:44.945 "listen_address": { 00:19:44.945 "trtype": "RDMA", 00:19:44.945 "adrfam": "IPv4", 00:19:44.945 "traddr": "192.168.100.8", 00:19:44.945 "trsvcid": "4420" 00:19:44.945 }, 00:19:44.945 "peer_address": { 00:19:44.945 "trtype": "RDMA", 00:19:44.945 "adrfam": "IPv4", 00:19:44.945 "traddr": "192.168.100.8", 00:19:44.945 "trsvcid": "54331" 00:19:44.945 }, 00:19:44.945 "auth": { 00:19:44.945 "state": "completed", 00:19:44.945 "digest": "sha256", 00:19:44.945 "dhgroup": "ffdhe8192" 00:19:44.945 } 00:19:44.945 } 00:19:44.945 ]' 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.945 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.205 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.205 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:45.205 01:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:45.772 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.030 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:46.030 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.030 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.030 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.030 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.030 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.030 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.288 01:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.855 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.855 { 00:19:46.855 "cntlid": 45, 00:19:46.855 "qid": 0, 00:19:46.855 "state": "enabled", 00:19:46.855 "thread": "nvmf_tgt_poll_group_000", 00:19:46.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:46.855 "listen_address": { 00:19:46.855 "trtype": "RDMA", 00:19:46.855 "adrfam": "IPv4", 00:19:46.855 "traddr": "192.168.100.8", 00:19:46.855 "trsvcid": "4420" 00:19:46.855 }, 00:19:46.855 "peer_address": { 00:19:46.855 "trtype": "RDMA", 00:19:46.855 "adrfam": "IPv4", 00:19:46.855 "traddr": "192.168.100.8", 00:19:46.855 "trsvcid": "34398" 00:19:46.855 }, 00:19:46.855 "auth": { 00:19:46.855 "state": "completed", 00:19:46.855 "digest": "sha256", 00:19:46.855 "dhgroup": "ffdhe8192" 00:19:46.855 } 00:19:46.855 } 00:19:46.855 ]' 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.855 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.114 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.114 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.114 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.114 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.114 01:03:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.372 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:47.372 01:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:47.938 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.938 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:47.938 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.938 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.938 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.938 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.938 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.938 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.197 01:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.763 00:19:48.763 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.763 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.763 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.022 { 00:19:49.022 "cntlid": 47, 00:19:49.022 "qid": 0, 00:19:49.022 "state": "enabled", 00:19:49.022 "thread": "nvmf_tgt_poll_group_000", 00:19:49.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:49.022 "listen_address": { 00:19:49.022 "trtype": "RDMA", 00:19:49.022 "adrfam": "IPv4", 00:19:49.022 "traddr": "192.168.100.8", 00:19:49.022 "trsvcid": "4420" 00:19:49.022 }, 00:19:49.022 "peer_address": { 00:19:49.022 "trtype": "RDMA", 00:19:49.022 "adrfam": "IPv4", 00:19:49.022 "traddr": "192.168.100.8", 00:19:49.022 "trsvcid": "49209" 00:19:49.022 }, 00:19:49.022 "auth": { 00:19:49.022 "state": "completed", 00:19:49.022 "digest": "sha256", 00:19:49.022 "dhgroup": "ffdhe8192" 00:19:49.022 } 00:19:49.022 } 00:19:49.022 ]' 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.022 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.281 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:49.281 01:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:49.848 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.106 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:50.106 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.106 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.106 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.106 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:50.106 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.106 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
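Each iteration also re-validates the same key material through the kernel initiator, as the repeated nvme connect / nvme disconnect records above show: the host-side bdev controller is detached, nvme-cli connects with the secrets passed inline as DHHC-1 blobs, the connection is torn down, and the host entry is removed from the subsystem before the next digest/DH-group/key combination. A sketch of that leg, with the two secrets abbreviated rather than copied from the trace:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
  hostid=801347e8-3fd0-e911-906e-0017a4403562
  # Detach the host-side bdev controller, then exercise the kernel initiator path.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # Drop the host registration so the next combination starts from a clean subsystem.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"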
00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.107 01:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.364 00:19:50.364 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.364 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.364 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.622 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.622 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.622 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.622 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.622 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.622 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.622 { 00:19:50.622 "cntlid": 49, 00:19:50.622 "qid": 0, 00:19:50.622 "state": "enabled", 00:19:50.622 "thread": "nvmf_tgt_poll_group_000", 00:19:50.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:50.622 "listen_address": { 00:19:50.622 "trtype": "RDMA", 00:19:50.622 "adrfam": "IPv4", 00:19:50.622 "traddr": "192.168.100.8", 00:19:50.622 "trsvcid": "4420" 00:19:50.622 }, 00:19:50.622 "peer_address": { 00:19:50.622 "trtype": "RDMA", 00:19:50.622 "adrfam": "IPv4", 00:19:50.622 "traddr": "192.168.100.8", 00:19:50.622 "trsvcid": "46753" 00:19:50.622 }, 00:19:50.622 "auth": { 00:19:50.622 "state": "completed", 00:19:50.622 "digest": "sha384", 00:19:50.622 "dhgroup": "null" 00:19:50.622 } 00:19:50.622 } 00:19:50.622 ]' 00:19:50.622 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.622 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.622 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:19:50.880 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:50.881 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.881 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.881 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.881 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.138 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:51.139 01:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:51.705 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.705 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:51.705 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.705 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.705 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.705 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.705 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:51.705 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.963 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.221 00:19:52.221 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.221 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.221 01:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.480 { 00:19:52.480 "cntlid": 51, 00:19:52.480 "qid": 0, 00:19:52.480 "state": "enabled", 00:19:52.480 "thread": "nvmf_tgt_poll_group_000", 00:19:52.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:52.480 "listen_address": { 00:19:52.480 "trtype": "RDMA", 00:19:52.480 "adrfam": "IPv4", 00:19:52.480 "traddr": "192.168.100.8", 00:19:52.480 "trsvcid": "4420" 00:19:52.480 }, 00:19:52.480 "peer_address": { 00:19:52.480 "trtype": "RDMA", 00:19:52.480 "adrfam": "IPv4", 00:19:52.480 "traddr": "192.168.100.8", 00:19:52.480 "trsvcid": "50017" 00:19:52.480 }, 00:19:52.480 "auth": { 00:19:52.480 "state": "completed", 00:19:52.480 "digest": "sha384", 00:19:52.480 "dhgroup": "null" 00:19:52.480 } 00:19:52.480 } 00:19:52.480 ]' 00:19:52.480 01:03:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.480 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.738 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:52.738 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:53.304 01:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.562 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:53.562 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.562 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.562 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.562 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.562 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.562 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:53.821 01:04:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.821 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.079 00:19:54.079 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.079 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.079 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.338 { 00:19:54.338 "cntlid": 53, 00:19:54.338 "qid": 0, 00:19:54.338 "state": "enabled", 00:19:54.338 "thread": "nvmf_tgt_poll_group_000", 00:19:54.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:54.338 "listen_address": { 00:19:54.338 "trtype": "RDMA", 00:19:54.338 "adrfam": "IPv4", 00:19:54.338 "traddr": "192.168.100.8", 00:19:54.338 "trsvcid": "4420" 00:19:54.338 }, 00:19:54.338 "peer_address": { 00:19:54.338 "trtype": "RDMA", 00:19:54.338 "adrfam": "IPv4", 00:19:54.338 "traddr": 
"192.168.100.8", 00:19:54.338 "trsvcid": "46178" 00:19:54.338 }, 00:19:54.338 "auth": { 00:19:54.338 "state": "completed", 00:19:54.338 "digest": "sha384", 00:19:54.338 "dhgroup": "null" 00:19:54.338 } 00:19:54.338 } 00:19:54.338 ]' 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.338 01:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.596 01:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:54.596 01:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:19:55.162 01:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.421 01:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:55.421 01:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.421 01:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.421 01:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.421 01:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.421 01:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.421 01:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:55.421 01:04:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.421 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.680 00:19:55.680 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.680 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.680 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.939 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.939 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.939 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.939 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.939 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.939 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.939 { 00:19:55.939 "cntlid": 55, 00:19:55.939 "qid": 0, 00:19:55.939 "state": "enabled", 00:19:55.939 "thread": "nvmf_tgt_poll_group_000", 00:19:55.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:55.939 "listen_address": { 00:19:55.939 "trtype": "RDMA", 00:19:55.939 "adrfam": "IPv4", 00:19:55.939 "traddr": "192.168.100.8", 00:19:55.939 "trsvcid": "4420" 
00:19:55.939 }, 00:19:55.939 "peer_address": { 00:19:55.939 "trtype": "RDMA", 00:19:55.939 "adrfam": "IPv4", 00:19:55.939 "traddr": "192.168.100.8", 00:19:55.939 "trsvcid": "47312" 00:19:55.939 }, 00:19:55.939 "auth": { 00:19:55.939 "state": "completed", 00:19:55.939 "digest": "sha384", 00:19:55.939 "dhgroup": "null" 00:19:55.939 } 00:19:55.939 } 00:19:55.939 ]' 00:19:55.939 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.939 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.939 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.197 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:56.197 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.197 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.197 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.197 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.456 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:56.456 01:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:19:57.023 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.023 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:57.023 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.023 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.023 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.023 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.023 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.023 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:57.023 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:57.281 
01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.281 01:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.540 00:19:57.540 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.540 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.540 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.799 { 00:19:57.799 "cntlid": 57, 00:19:57.799 "qid": 0, 00:19:57.799 "state": "enabled", 00:19:57.799 "thread": "nvmf_tgt_poll_group_000", 00:19:57.799 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:57.799 "listen_address": { 00:19:57.799 "trtype": "RDMA", 00:19:57.799 "adrfam": "IPv4", 00:19:57.799 "traddr": "192.168.100.8", 00:19:57.799 "trsvcid": "4420" 00:19:57.799 }, 00:19:57.799 "peer_address": { 00:19:57.799 "trtype": "RDMA", 00:19:57.799 "adrfam": "IPv4", 00:19:57.799 "traddr": "192.168.100.8", 00:19:57.799 "trsvcid": "38694" 00:19:57.799 }, 00:19:57.799 "auth": { 00:19:57.799 "state": "completed", 00:19:57.799 "digest": "sha384", 00:19:57.799 "dhgroup": "ffdhe2048" 00:19:57.799 } 00:19:57.799 } 00:19:57.799 ]' 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.799 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.058 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:58.058 01:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:19:58.625 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.883 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.141 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.141 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.141 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.141 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.141 00:19:59.400 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.400 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.400 01:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.400 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.400 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.400 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.400 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.400 01:04:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.400 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.400 { 00:19:59.400 "cntlid": 59, 00:19:59.400 "qid": 0, 00:19:59.400 "state": "enabled", 00:19:59.400 "thread": "nvmf_tgt_poll_group_000", 00:19:59.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:59.400 "listen_address": { 00:19:59.400 "trtype": "RDMA", 00:19:59.400 "adrfam": "IPv4", 00:19:59.400 "traddr": "192.168.100.8", 00:19:59.400 "trsvcid": "4420" 00:19:59.400 }, 00:19:59.400 "peer_address": { 00:19:59.400 "trtype": "RDMA", 00:19:59.400 "adrfam": "IPv4", 00:19:59.400 "traddr": "192.168.100.8", 00:19:59.400 "trsvcid": "58309" 00:19:59.400 }, 00:19:59.400 "auth": { 00:19:59.400 "state": "completed", 00:19:59.400 "digest": "sha384", 00:19:59.400 "dhgroup": "ffdhe2048" 00:19:59.400 } 00:19:59.400 } 00:19:59.400 ]' 00:19:59.400 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.659 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.659 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.659 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:59.659 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.659 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.659 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.659 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.917 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:19:59.917 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:00.484 01:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.484 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:00.484 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.484 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.484 01:04:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.484 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.484 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.484 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.743 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.001 00:20:01.001 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.001 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.001 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.259 { 00:20:01.259 "cntlid": 61, 00:20:01.259 "qid": 0, 00:20:01.259 "state": "enabled", 00:20:01.259 "thread": "nvmf_tgt_poll_group_000", 00:20:01.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:01.259 "listen_address": { 00:20:01.259 "trtype": "RDMA", 00:20:01.259 "adrfam": "IPv4", 00:20:01.259 "traddr": "192.168.100.8", 00:20:01.259 "trsvcid": "4420" 00:20:01.259 }, 00:20:01.259 "peer_address": { 00:20:01.259 "trtype": "RDMA", 00:20:01.259 "adrfam": "IPv4", 00:20:01.259 "traddr": "192.168.100.8", 00:20:01.259 "trsvcid": "37262" 00:20:01.259 }, 00:20:01.259 "auth": { 00:20:01.259 "state": "completed", 00:20:01.259 "digest": "sha384", 00:20:01.259 "dhgroup": "ffdhe2048" 00:20:01.259 } 00:20:01.259 } 00:20:01.259 ]' 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.259 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.518 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.518 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.518 01:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.518 01:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:01.518 01:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:02.084 01:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.343 01:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:02.343 01:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.343 01:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.343 01:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.343 01:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.343 01:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.343 01:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.601 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.859 00:20:02.859 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.859 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.859 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.118 { 00:20:03.118 "cntlid": 63, 00:20:03.118 "qid": 0, 00:20:03.118 "state": "enabled", 00:20:03.118 "thread": "nvmf_tgt_poll_group_000", 00:20:03.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:03.118 "listen_address": { 00:20:03.118 "trtype": "RDMA", 00:20:03.118 "adrfam": "IPv4", 00:20:03.118 "traddr": "192.168.100.8", 00:20:03.118 "trsvcid": "4420" 00:20:03.118 }, 00:20:03.118 "peer_address": { 00:20:03.118 "trtype": "RDMA", 00:20:03.118 "adrfam": "IPv4", 00:20:03.118 "traddr": "192.168.100.8", 00:20:03.118 "trsvcid": "36451" 00:20:03.118 }, 00:20:03.118 "auth": { 00:20:03.118 "state": "completed", 00:20:03.118 "digest": "sha384", 00:20:03.118 "dhgroup": "ffdhe2048" 00:20:03.118 } 00:20:03.118 } 00:20:03.118 ]' 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.118 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.376 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:03.376 01:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:03.942 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.942 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:03.942 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.942 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.201 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.201 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.201 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.201 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.202 01:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.460 00:20:04.460 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.460 01:04:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.461 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.720 { 00:20:04.720 "cntlid": 65, 00:20:04.720 "qid": 0, 00:20:04.720 "state": "enabled", 00:20:04.720 "thread": "nvmf_tgt_poll_group_000", 00:20:04.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:04.720 "listen_address": { 00:20:04.720 "trtype": "RDMA", 00:20:04.720 "adrfam": "IPv4", 00:20:04.720 "traddr": "192.168.100.8", 00:20:04.720 "trsvcid": "4420" 00:20:04.720 }, 00:20:04.720 "peer_address": { 00:20:04.720 "trtype": "RDMA", 00:20:04.720 "adrfam": "IPv4", 00:20:04.720 "traddr": "192.168.100.8", 00:20:04.720 "trsvcid": "56601" 00:20:04.720 }, 00:20:04.720 "auth": { 00:20:04.720 "state": "completed", 00:20:04.720 "digest": "sha384", 00:20:04.720 "dhgroup": "ffdhe3072" 00:20:04.720 } 00:20:04.720 } 00:20:04.720 ]' 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.720 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.979 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.979 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.979 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.979 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:04.979 01:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:05.546 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.805 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:05.805 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.805 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.805 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.805 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.805 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.805 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.065 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.324 00:20:06.324 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.324 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.324 01:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.583 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.583 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.583 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.583 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.583 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.583 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.583 { 00:20:06.583 "cntlid": 67, 00:20:06.583 "qid": 0, 00:20:06.583 "state": "enabled", 00:20:06.583 "thread": "nvmf_tgt_poll_group_000", 00:20:06.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:06.583 "listen_address": { 00:20:06.583 "trtype": "RDMA", 00:20:06.583 "adrfam": "IPv4", 00:20:06.583 "traddr": "192.168.100.8", 00:20:06.583 "trsvcid": "4420" 00:20:06.583 }, 00:20:06.583 "peer_address": { 00:20:06.583 "trtype": "RDMA", 00:20:06.583 "adrfam": "IPv4", 00:20:06.583 "traddr": "192.168.100.8", 00:20:06.583 "trsvcid": "46216" 00:20:06.583 }, 00:20:06.583 "auth": { 00:20:06.583 "state": "completed", 00:20:06.583 "digest": "sha384", 00:20:06.584 "dhgroup": "ffdhe3072" 00:20:06.584 } 00:20:06.584 } 00:20:06.584 ]' 00:20:06.584 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.584 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.584 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.584 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.584 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.584 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.584 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.584 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.843 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret 
DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:06.843 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:07.410 01:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.669 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.928 00:20:08.187 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.187 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.187 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.187 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.187 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.188 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.188 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.188 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.188 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.188 { 00:20:08.188 "cntlid": 69, 00:20:08.188 "qid": 0, 00:20:08.188 "state": "enabled", 00:20:08.188 "thread": "nvmf_tgt_poll_group_000", 00:20:08.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:08.188 "listen_address": { 00:20:08.188 "trtype": "RDMA", 00:20:08.188 "adrfam": "IPv4", 00:20:08.188 "traddr": "192.168.100.8", 00:20:08.188 "trsvcid": "4420" 00:20:08.188 }, 00:20:08.188 "peer_address": { 00:20:08.188 "trtype": "RDMA", 00:20:08.188 "adrfam": "IPv4", 00:20:08.188 "traddr": "192.168.100.8", 00:20:08.188 "trsvcid": "38216" 00:20:08.188 }, 00:20:08.188 "auth": { 00:20:08.188 "state": "completed", 00:20:08.188 "digest": "sha384", 00:20:08.188 "dhgroup": "ffdhe3072" 00:20:08.188 } 00:20:08.188 } 00:20:08.188 ]' 00:20:08.188 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.188 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.188 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.447 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.447 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.447 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.447 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.447 01:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.706 01:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:08.706 01:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:09.273 01:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.273 01:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:09.273 01:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.273 01:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.273 01:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.273 01:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.274 01:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.274 01:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- 
# bdev_connect -b nvme0 --dhchap-key key3 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.532 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.791 00:20:09.791 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.791 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.791 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.050 { 00:20:10.050 "cntlid": 71, 00:20:10.050 "qid": 0, 00:20:10.050 "state": "enabled", 00:20:10.050 "thread": "nvmf_tgt_poll_group_000", 00:20:10.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:10.050 "listen_address": { 00:20:10.050 "trtype": "RDMA", 00:20:10.050 "adrfam": "IPv4", 00:20:10.050 "traddr": "192.168.100.8", 00:20:10.050 "trsvcid": "4420" 00:20:10.050 }, 00:20:10.050 "peer_address": { 00:20:10.050 "trtype": "RDMA", 00:20:10.050 "adrfam": "IPv4", 00:20:10.050 "traddr": "192.168.100.8", 00:20:10.050 "trsvcid": "34302" 00:20:10.050 }, 00:20:10.050 "auth": { 00:20:10.050 "state": "completed", 00:20:10.050 "digest": "sha384", 00:20:10.050 "dhgroup": "ffdhe3072" 00:20:10.050 } 00:20:10.050 } 00:20:10.050 ]' 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
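Editor's note: for readers following the xtrace, each connect_authenticate pass above reduces to the sequence below. This is a condensed, hand-written paraphrase of the RPCs the log shows (same commands, same address 192.168.100.8:4420); SUBNQN and HOSTNQN are shorthand for the literal nqn.2024-03.io.spdk:cnode0 and nqn.2014-08.org.nvmexpress:uuid:801347e8-... strings in the log, and the DHHC-1 secrets are elided rather than repeated. In the log, rpc_cmd targets the nvmf target app's default socket and hostrpc is rpc.py pointed at /var/tmp/host.sock.

  # host-side bdev options: restrict DH-HMAC-CHAP to the digest/dhgroup under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # target side: allow HOSTNQN on SUBNQN with key N (plus a controller key when one is configured)
  scripts/rpc.py nvmf_subsystem_add_host SUBNQN HOSTNQN \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # host side: authenticated RDMA attach, inspect the qpair's auth block, then detach
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q HOSTNQN -n SUBNQN -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py nvmf_subsystem_get_qpairs SUBNQN | jq -r '.[0].auth'
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0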
00:20:10.050 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.309 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:10.309 01:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:10.876 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.136 01:04:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.136 01:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.704 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.704 { 00:20:11.704 "cntlid": 73, 00:20:11.704 "qid": 0, 00:20:11.704 "state": "enabled", 00:20:11.704 "thread": "nvmf_tgt_poll_group_000", 00:20:11.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:11.704 "listen_address": { 00:20:11.704 "trtype": "RDMA", 00:20:11.704 "adrfam": "IPv4", 00:20:11.704 "traddr": "192.168.100.8", 00:20:11.704 "trsvcid": "4420" 00:20:11.704 }, 00:20:11.704 "peer_address": { 00:20:11.704 "trtype": "RDMA", 00:20:11.704 "adrfam": "IPv4", 00:20:11.704 "traddr": "192.168.100.8", 00:20:11.704 "trsvcid": "58353" 00:20:11.704 }, 00:20:11.704 "auth": { 00:20:11.704 "state": "completed", 00:20:11.704 "digest": "sha384", 00:20:11.704 "dhgroup": "ffdhe4096" 00:20:11.704 } 00:20:11.704 } 00:20:11.704 ]' 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.704 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.963 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.963 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
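Editor's note: the three jq probes repeated after every attach are the actual pass/fail checks for a negotiation. The qpair's auth block must report the digest and dhgroup that were just configured and a state of "completed". A minimal standalone version, using the same jq paths as the log and mirroring the [[ ... ]] comparisons at auth.sh lines 75-77:

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # digest under test
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # dhgroup under test
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # negotiation finished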
00:20:11.963 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.963 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.963 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.222 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:12.222 01:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:12.791 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.791 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:12.791 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.791 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.791 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.791 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.791 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.791 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:13.050 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:13.050 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.050 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.050 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:13.051 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.051 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.051 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.051 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.051 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.051 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.051 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.051 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.051 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.311 00:20:13.311 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.311 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.311 01:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.569 { 00:20:13.569 "cntlid": 75, 00:20:13.569 "qid": 0, 00:20:13.569 "state": "enabled", 00:20:13.569 "thread": "nvmf_tgt_poll_group_000", 00:20:13.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:13.569 "listen_address": { 00:20:13.569 "trtype": "RDMA", 00:20:13.569 "adrfam": "IPv4", 00:20:13.569 "traddr": "192.168.100.8", 00:20:13.569 "trsvcid": "4420" 00:20:13.569 }, 00:20:13.569 "peer_address": { 00:20:13.569 "trtype": "RDMA", 00:20:13.569 "adrfam": "IPv4", 00:20:13.569 "traddr": "192.168.100.8", 00:20:13.569 "trsvcid": "39875" 00:20:13.569 }, 00:20:13.569 "auth": { 00:20:13.569 "state": "completed", 00:20:13.569 "digest": "sha384", 00:20:13.569 "dhgroup": "ffdhe4096" 00:20:13.569 } 00:20:13.569 } 00:20:13.569 ]' 00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
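Editor's note: once the SPDK-to-SPDK attach is torn down, the same key pair is exercised from the kernel initiator: nvme-cli connects with the DHHC-1 secret(s), and the subsystem host entry is removed after the controller disconnects cleanly. Condensed from the nvme_connect/nvme disconnect/remove_host calls in the log; HOSTNQN again abbreviates the uuid NQN above, and the secrets are shortened here, the real values being the base64 DHHC-1 blobs shown in the xtrace.

  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q HOSTNQN --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 HOSTNQN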
00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.569 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.828 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.828 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.828 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.828 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:13.828 01:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:14.395 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.654 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:14.654 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.654 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.654 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.654 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.654 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:14.654 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.913 01:04:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.913 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.172 00:20:15.172 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.172 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.172 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.431 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.431 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.431 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.431 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.431 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.431 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.431 { 00:20:15.431 "cntlid": 77, 00:20:15.431 "qid": 0, 00:20:15.431 "state": "enabled", 00:20:15.431 "thread": "nvmf_tgt_poll_group_000", 00:20:15.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:15.431 "listen_address": { 00:20:15.431 "trtype": "RDMA", 00:20:15.431 "adrfam": "IPv4", 00:20:15.431 "traddr": "192.168.100.8", 00:20:15.431 "trsvcid": "4420" 00:20:15.431 }, 00:20:15.431 "peer_address": { 00:20:15.431 "trtype": "RDMA", 00:20:15.431 "adrfam": "IPv4", 00:20:15.431 "traddr": "192.168.100.8", 00:20:15.431 "trsvcid": "42207" 00:20:15.431 }, 00:20:15.431 "auth": { 00:20:15.431 "state": "completed", 00:20:15.431 "digest": "sha384", 00:20:15.431 "dhgroup": "ffdhe4096" 00:20:15.431 } 
00:20:15.431 } 00:20:15.431 ]' 00:20:15.431 01:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.431 01:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.431 01:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.431 01:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:15.431 01:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.431 01:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.431 01:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.431 01:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.690 01:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:15.690 01:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:16.258 01:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.517 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:16.517 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.517 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.517 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.517 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.517 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.517 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.776 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.777 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.035 00:20:17.035 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.035 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.035 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.294 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.294 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.294 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.294 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.294 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.294 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.294 { 00:20:17.294 "cntlid": 79, 00:20:17.294 "qid": 0, 00:20:17.294 "state": "enabled", 00:20:17.294 "thread": "nvmf_tgt_poll_group_000", 00:20:17.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:17.294 "listen_address": { 00:20:17.294 "trtype": "RDMA", 00:20:17.294 "adrfam": "IPv4", 00:20:17.294 "traddr": "192.168.100.8", 00:20:17.294 "trsvcid": "4420" 00:20:17.294 }, 00:20:17.294 "peer_address": { 00:20:17.294 "trtype": "RDMA", 00:20:17.294 "adrfam": "IPv4", 00:20:17.294 "traddr": "192.168.100.8", 00:20:17.295 "trsvcid": 
"40674" 00:20:17.295 }, 00:20:17.295 "auth": { 00:20:17.295 "state": "completed", 00:20:17.295 "digest": "sha384", 00:20:17.295 "dhgroup": "ffdhe4096" 00:20:17.295 } 00:20:17.295 } 00:20:17.295 ]' 00:20:17.295 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.295 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.295 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.295 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:17.295 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.295 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.295 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.295 01:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.553 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:17.553 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:18.120 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.380 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:18.380 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.380 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.380 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.380 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.380 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.380 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:18.380 01:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:18.380 01:04:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.380 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.948 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.948 { 00:20:18.948 "cntlid": 81, 00:20:18.948 "qid": 0, 00:20:18.948 "state": "enabled", 00:20:18.948 "thread": "nvmf_tgt_poll_group_000", 00:20:18.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:18.948 "listen_address": { 00:20:18.948 "trtype": 
"RDMA", 00:20:18.948 "adrfam": "IPv4", 00:20:18.948 "traddr": "192.168.100.8", 00:20:18.948 "trsvcid": "4420" 00:20:18.948 }, 00:20:18.948 "peer_address": { 00:20:18.948 "trtype": "RDMA", 00:20:18.948 "adrfam": "IPv4", 00:20:18.948 "traddr": "192.168.100.8", 00:20:18.948 "trsvcid": "47421" 00:20:18.948 }, 00:20:18.948 "auth": { 00:20:18.948 "state": "completed", 00:20:18.948 "digest": "sha384", 00:20:18.948 "dhgroup": "ffdhe6144" 00:20:18.948 } 00:20:18.948 } 00:20:18.948 ]' 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.948 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.207 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.207 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.207 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.207 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.207 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.466 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:19.466 01:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:20.034 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.034 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:20.035 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.035 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.035 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.035 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.035 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:20.035 01:04:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.294 01:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.553 00:20:20.553 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.553 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.553 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.813 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.813 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.813 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.813 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.813 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.813 
01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.813 { 00:20:20.813 "cntlid": 83, 00:20:20.813 "qid": 0, 00:20:20.813 "state": "enabled", 00:20:20.813 "thread": "nvmf_tgt_poll_group_000", 00:20:20.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:20.813 "listen_address": { 00:20:20.813 "trtype": "RDMA", 00:20:20.813 "adrfam": "IPv4", 00:20:20.813 "traddr": "192.168.100.8", 00:20:20.813 "trsvcid": "4420" 00:20:20.813 }, 00:20:20.813 "peer_address": { 00:20:20.813 "trtype": "RDMA", 00:20:20.813 "adrfam": "IPv4", 00:20:20.813 "traddr": "192.168.100.8", 00:20:20.813 "trsvcid": "51061" 00:20:20.813 }, 00:20:20.813 "auth": { 00:20:20.813 "state": "completed", 00:20:20.813 "digest": "sha384", 00:20:20.813 "dhgroup": "ffdhe6144" 00:20:20.813 } 00:20:20.813 } 00:20:20.813 ]' 00:20:20.813 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.813 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.813 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.072 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.072 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.072 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.072 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.072 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.330 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:21.330 01:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:21.897 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.897 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:21.897 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.897 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.897 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.897 01:04:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.897 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:21.897 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.157 01:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.416 00:20:22.416 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.416 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.416 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.675 { 00:20:22.675 "cntlid": 85, 00:20:22.675 "qid": 0, 00:20:22.675 "state": "enabled", 00:20:22.675 "thread": "nvmf_tgt_poll_group_000", 00:20:22.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:22.675 "listen_address": { 00:20:22.675 "trtype": "RDMA", 00:20:22.675 "adrfam": "IPv4", 00:20:22.675 "traddr": "192.168.100.8", 00:20:22.675 "trsvcid": "4420" 00:20:22.675 }, 00:20:22.675 "peer_address": { 00:20:22.675 "trtype": "RDMA", 00:20:22.675 "adrfam": "IPv4", 00:20:22.675 "traddr": "192.168.100.8", 00:20:22.675 "trsvcid": "59110" 00:20:22.675 }, 00:20:22.675 "auth": { 00:20:22.675 "state": "completed", 00:20:22.675 "digest": "sha384", 00:20:22.675 "dhgroup": "ffdhe6144" 00:20:22.675 } 00:20:22.675 } 00:20:22.675 ]' 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:22.675 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.934 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.934 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.934 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.934 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:22.934 01:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:23.501 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.759 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:23.759 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:23.759 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.759 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.759 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.759 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:23.759 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.019 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.278 00:20:24.278 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.278 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.278 01:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.538 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.538 01:04:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.538 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.538 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.538 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.538 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.538 { 00:20:24.538 "cntlid": 87, 00:20:24.538 "qid": 0, 00:20:24.538 "state": "enabled", 00:20:24.538 "thread": "nvmf_tgt_poll_group_000", 00:20:24.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:24.538 "listen_address": { 00:20:24.538 "trtype": "RDMA", 00:20:24.538 "adrfam": "IPv4", 00:20:24.538 "traddr": "192.168.100.8", 00:20:24.538 "trsvcid": "4420" 00:20:24.538 }, 00:20:24.538 "peer_address": { 00:20:24.538 "trtype": "RDMA", 00:20:24.538 "adrfam": "IPv4", 00:20:24.538 "traddr": "192.168.100.8", 00:20:24.538 "trsvcid": "53309" 00:20:24.538 }, 00:20:24.538 "auth": { 00:20:24.538 "state": "completed", 00:20:24.538 "digest": "sha384", 00:20:24.538 "dhgroup": "ffdhe6144" 00:20:24.538 } 00:20:24.538 } 00:20:24.538 ]' 00:20:24.538 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.538 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.538 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.538 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:24.538 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.798 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.798 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.798 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.798 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:24.798 01:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:25.366 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.625 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:25.625 01:04:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.625 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.625 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.625 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.625 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.625 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:25.625 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.884 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.452 00:20:26.452 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.452 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:26.452 01:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.452 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.452 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.452 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.452 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.452 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.452 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.452 { 00:20:26.452 "cntlid": 89, 00:20:26.452 "qid": 0, 00:20:26.452 "state": "enabled", 00:20:26.452 "thread": "nvmf_tgt_poll_group_000", 00:20:26.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:26.452 "listen_address": { 00:20:26.452 "trtype": "RDMA", 00:20:26.452 "adrfam": "IPv4", 00:20:26.452 "traddr": "192.168.100.8", 00:20:26.452 "trsvcid": "4420" 00:20:26.452 }, 00:20:26.452 "peer_address": { 00:20:26.452 "trtype": "RDMA", 00:20:26.452 "adrfam": "IPv4", 00:20:26.452 "traddr": "192.168.100.8", 00:20:26.452 "trsvcid": "39421" 00:20:26.452 }, 00:20:26.452 "auth": { 00:20:26.452 "state": "completed", 00:20:26.452 "digest": "sha384", 00:20:26.452 "dhgroup": "ffdhe8192" 00:20:26.452 } 00:20:26.452 } 00:20:26.452 ]' 00:20:26.452 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.452 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.452 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.711 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.711 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.711 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.711 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.711 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.970 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:26.970 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret 
DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:27.537 01:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.537 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:27.537 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.537 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.537 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.537 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.537 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:27.537 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.795 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.397 00:20:28.397 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.397 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.397 01:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.397 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.397 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.397 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.397 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.397 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.397 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.397 { 00:20:28.397 "cntlid": 91, 00:20:28.397 "qid": 0, 00:20:28.397 "state": "enabled", 00:20:28.397 "thread": "nvmf_tgt_poll_group_000", 00:20:28.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:28.397 "listen_address": { 00:20:28.397 "trtype": "RDMA", 00:20:28.397 "adrfam": "IPv4", 00:20:28.397 "traddr": "192.168.100.8", 00:20:28.397 "trsvcid": "4420" 00:20:28.397 }, 00:20:28.397 "peer_address": { 00:20:28.397 "trtype": "RDMA", 00:20:28.397 "adrfam": "IPv4", 00:20:28.397 "traddr": "192.168.100.8", 00:20:28.397 "trsvcid": "37882" 00:20:28.397 }, 00:20:28.397 "auth": { 00:20:28.397 "state": "completed", 00:20:28.397 "digest": "sha384", 00:20:28.397 "dhgroup": "ffdhe8192" 00:20:28.397 } 00:20:28.397 } 00:20:28.397 ]' 00:20:28.397 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.397 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.656 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.656 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.656 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.656 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.656 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.656 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.915 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:28.915 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:29.482 01:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.483 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:29.483 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.483 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.483 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.483 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.483 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.483 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:20:29.741 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.308 00:20:30.308 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.308 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.308 01:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.566 { 00:20:30.566 "cntlid": 93, 00:20:30.566 "qid": 0, 00:20:30.566 "state": "enabled", 00:20:30.566 "thread": "nvmf_tgt_poll_group_000", 00:20:30.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:30.566 "listen_address": { 00:20:30.566 "trtype": "RDMA", 00:20:30.566 "adrfam": "IPv4", 00:20:30.566 "traddr": "192.168.100.8", 00:20:30.566 "trsvcid": "4420" 00:20:30.566 }, 00:20:30.566 "peer_address": { 00:20:30.566 "trtype": "RDMA", 00:20:30.566 "adrfam": "IPv4", 00:20:30.566 "traddr": "192.168.100.8", 00:20:30.566 "trsvcid": "42974" 00:20:30.566 }, 00:20:30.566 "auth": { 00:20:30.566 "state": "completed", 00:20:30.566 "digest": "sha384", 00:20:30.566 "dhgroup": "ffdhe8192" 00:20:30.566 } 00:20:30.566 } 00:20:30.566 ]' 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.566 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.825 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # 
nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:30.825 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:31.392 01:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.651 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.910 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.910 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.910 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.910 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.169 00:20:32.169 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.169 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.169 01:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.428 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.428 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.428 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.428 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.428 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.428 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.428 { 00:20:32.428 "cntlid": 95, 00:20:32.428 "qid": 0, 00:20:32.428 "state": "enabled", 00:20:32.428 "thread": "nvmf_tgt_poll_group_000", 00:20:32.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:32.428 "listen_address": { 00:20:32.428 "trtype": "RDMA", 00:20:32.428 "adrfam": "IPv4", 00:20:32.428 "traddr": "192.168.100.8", 00:20:32.428 "trsvcid": "4420" 00:20:32.428 }, 00:20:32.428 "peer_address": { 00:20:32.428 "trtype": "RDMA", 00:20:32.428 "adrfam": "IPv4", 00:20:32.428 "traddr": "192.168.100.8", 00:20:32.428 "trsvcid": "43176" 00:20:32.428 }, 00:20:32.428 "auth": { 00:20:32.428 "state": "completed", 00:20:32.428 "digest": "sha384", 00:20:32.428 "dhgroup": "ffdhe8192" 00:20:32.428 } 00:20:32.428 } 00:20:32.428 ]' 00:20:32.428 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.428 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.428 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.689 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.690 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.690 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.690 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.690 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.952 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:32.952 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:33.520 01:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.520 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:33.520 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.520 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.520 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.520 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:33.520 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.520 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.520 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.520 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
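The sha384/ffdhe8192 combinations above finish here, and the loop reconfigures the host for the sha512 digest with the null DH group. Each connect_authenticate iteration in this trace follows the same three-step pattern: restrict the host-side DH-HMAC-CHAP parameters, authorize the host NQN on the target with the key pair under test, then attach a controller with the matching keys. A minimal sketch of that sequence, with rpc.py standing in for the full scripts/rpc.py path used above and <host-nqn> as a placeholder for the uuid-based host NQN printed in the trace:

# host side: allow exactly one digest/dhgroup combination for DH-HMAC-CHAP
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
# target side: authorize the host for the subsystem with the key (and a controller key, when one exists)
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach a controller over RDMA using the same key names
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0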
00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.779 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.038 00:20:34.038 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.038 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.038 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.298 { 00:20:34.298 "cntlid": 97, 00:20:34.298 "qid": 0, 00:20:34.298 "state": "enabled", 00:20:34.298 "thread": "nvmf_tgt_poll_group_000", 00:20:34.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:34.298 "listen_address": { 00:20:34.298 "trtype": "RDMA", 00:20:34.298 "adrfam": "IPv4", 00:20:34.298 "traddr": "192.168.100.8", 00:20:34.298 "trsvcid": "4420" 00:20:34.298 }, 00:20:34.298 "peer_address": { 00:20:34.298 "trtype": "RDMA", 00:20:34.298 "adrfam": "IPv4", 00:20:34.298 "traddr": "192.168.100.8", 00:20:34.298 "trsvcid": "56525" 00:20:34.298 }, 00:20:34.298 "auth": { 00:20:34.298 "state": "completed", 00:20:34.298 "digest": "sha512", 00:20:34.298 "dhgroup": "null" 00:20:34.298 } 00:20:34.298 } 00:20:34.298 ]' 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
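After each attach, the test verifies on the target side that authentication actually completed with the expected parameters: bdev_nvme_get_controllers confirms nvme0 is present, and nvmf_subsystem_get_qpairs returns the qpair JSON shown above, whose auth block is checked with jq before the controller is detached again. A compact sketch of those assertions for the sha512/null iteration:

qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
# tear down before the next key/dhgroup combination
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0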
00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.298 01:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.557 01:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:34.557 01:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:35.125 01:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.385 01:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:35.385 01:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.385 01:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.385 01:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.385 01:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.385 01:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.385 01:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.385 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.644 00:20:35.644 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.644 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.644 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.903 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.903 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.903 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.903 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.903 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.903 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.903 { 00:20:35.903 "cntlid": 99, 00:20:35.903 "qid": 0, 00:20:35.903 "state": "enabled", 00:20:35.903 "thread": "nvmf_tgt_poll_group_000", 00:20:35.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:35.903 "listen_address": { 00:20:35.903 "trtype": "RDMA", 00:20:35.903 "adrfam": "IPv4", 00:20:35.903 "traddr": "192.168.100.8", 00:20:35.903 "trsvcid": "4420" 00:20:35.903 }, 00:20:35.903 "peer_address": { 00:20:35.903 "trtype": "RDMA", 00:20:35.903 "adrfam": "IPv4", 00:20:35.903 "traddr": "192.168.100.8", 00:20:35.903 "trsvcid": "39410" 00:20:35.903 }, 00:20:35.903 "auth": { 00:20:35.903 "state": "completed", 00:20:35.903 "digest": "sha512", 00:20:35.903 "dhgroup": "null" 00:20:35.903 } 00:20:35.903 } 00:20:35.903 ]' 00:20:35.903 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.903 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.903 
01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.162 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:36.162 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.162 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.162 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.162 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.421 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:36.421 01:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:36.989 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.989 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:36.989 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.989 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.989 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.989 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.989 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:36.989 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.248 01:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.506 00:20:37.506 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.506 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.507 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.766 { 00:20:37.766 "cntlid": 101, 00:20:37.766 "qid": 0, 00:20:37.766 "state": "enabled", 00:20:37.766 "thread": "nvmf_tgt_poll_group_000", 00:20:37.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:37.766 "listen_address": { 00:20:37.766 "trtype": "RDMA", 00:20:37.766 "adrfam": "IPv4", 00:20:37.766 "traddr": "192.168.100.8", 00:20:37.766 "trsvcid": "4420" 00:20:37.766 }, 00:20:37.766 "peer_address": { 00:20:37.766 "trtype": "RDMA", 00:20:37.766 "adrfam": "IPv4", 00:20:37.766 "traddr": "192.168.100.8", 00:20:37.766 "trsvcid": "56590" 00:20:37.766 }, 00:20:37.766 "auth": { 00:20:37.766 "state": "completed", 00:20:37.766 "digest": "sha512", 00:20:37.766 "dhgroup": "null" 00:20:37.766 } 00:20:37.766 } 00:20:37.766 ]' 00:20:37.766 01:04:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.766 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.024 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:38.025 01:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:38.591 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.850 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:38.850 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.850 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.850 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.850 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.850 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:38.850 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:39.108 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:39.108 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.109 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:39.109 01:04:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:39.109 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:39.109 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.109 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:39.109 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.109 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.109 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.109 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:39.109 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.109 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.367 00:20:39.367 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.367 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.367 01:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.367 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.367 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.367 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.367 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.367 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.367 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.367 { 00:20:39.367 "cntlid": 103, 00:20:39.367 "qid": 0, 00:20:39.367 "state": "enabled", 00:20:39.367 "thread": "nvmf_tgt_poll_group_000", 00:20:39.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:39.367 "listen_address": { 00:20:39.368 "trtype": "RDMA", 00:20:39.368 "adrfam": "IPv4", 00:20:39.368 "traddr": "192.168.100.8", 00:20:39.368 "trsvcid": "4420" 00:20:39.368 }, 00:20:39.368 "peer_address": { 00:20:39.368 "trtype": "RDMA", 00:20:39.368 "adrfam": "IPv4", 00:20:39.368 "traddr": "192.168.100.8", 00:20:39.368 "trsvcid": "47149" 00:20:39.368 }, 00:20:39.368 "auth": { 00:20:39.368 
"state": "completed", 00:20:39.368 "digest": "sha512", 00:20:39.368 "dhgroup": "null" 00:20:39.368 } 00:20:39.368 } 00:20:39.368 ]' 00:20:39.368 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.626 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.626 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.626 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:39.626 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.626 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.626 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.626 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.886 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:39.886 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:40.453 01:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.453 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:40.454 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.454 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.454 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.454 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.454 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.454 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.454 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.713 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.972 00:20:40.972 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.972 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.972 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.231 { 00:20:41.231 "cntlid": 105, 00:20:41.231 "qid": 0, 00:20:41.231 "state": "enabled", 00:20:41.231 "thread": "nvmf_tgt_poll_group_000", 00:20:41.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:41.231 "listen_address": { 00:20:41.231 "trtype": "RDMA", 00:20:41.231 "adrfam": "IPv4", 00:20:41.231 "traddr": "192.168.100.8", 00:20:41.231 
"trsvcid": "4420" 00:20:41.231 }, 00:20:41.231 "peer_address": { 00:20:41.231 "trtype": "RDMA", 00:20:41.231 "adrfam": "IPv4", 00:20:41.231 "traddr": "192.168.100.8", 00:20:41.231 "trsvcid": "40683" 00:20:41.231 }, 00:20:41.231 "auth": { 00:20:41.231 "state": "completed", 00:20:41.231 "digest": "sha512", 00:20:41.231 "dhgroup": "ffdhe2048" 00:20:41.231 } 00:20:41.231 } 00:20:41.231 ]' 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.231 01:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.490 01:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:41.490 01:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:42.058 01:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.317 01:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:42.317 01:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.317 01:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.317 01:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.317 01:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.317 01:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.317 01:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.575 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.576 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.833 00:20:42.833 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.833 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.833 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.833 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.833 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.833 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.833 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.833 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.834 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- 
# qpairs='[ 00:20:42.834 { 00:20:42.834 "cntlid": 107, 00:20:42.834 "qid": 0, 00:20:42.834 "state": "enabled", 00:20:42.834 "thread": "nvmf_tgt_poll_group_000", 00:20:42.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:42.834 "listen_address": { 00:20:42.834 "trtype": "RDMA", 00:20:42.834 "adrfam": "IPv4", 00:20:42.834 "traddr": "192.168.100.8", 00:20:42.834 "trsvcid": "4420" 00:20:42.834 }, 00:20:42.834 "peer_address": { 00:20:42.834 "trtype": "RDMA", 00:20:42.834 "adrfam": "IPv4", 00:20:42.834 "traddr": "192.168.100.8", 00:20:42.834 "trsvcid": "54796" 00:20:42.834 }, 00:20:42.834 "auth": { 00:20:42.834 "state": "completed", 00:20:42.834 "digest": "sha512", 00:20:42.834 "dhgroup": "ffdhe2048" 00:20:42.834 } 00:20:42.834 } 00:20:42.834 ]' 00:20:42.834 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.092 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.092 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.092 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.092 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.092 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.092 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.092 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.352 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:43.352 01:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:43.920 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.920 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:43.920 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.920 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.920 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.920 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
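From here the log repeats the identical pass with the ffdhe2048 DH group (and, further down, ffdhe3072), re-issuing bdev_nvme_set_options before each connect_authenticate call. Judging from the target/auth.sh@118 to @123 markers quoted in the log, the driving loop is roughly the following sketch; the digest is fixed to the sha512 value visible in this excerpt, the group list is limited to the groups shown here (the script may cover more), and connect_authenticate stands for the attach/verify/detach sequence sketched earlier:

rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }    # matches the hostrpc wrapper used throughout the log

for dhgroup in null ffdhe2048 ffdhe3072; do         # only the DH groups visible in this excerpt
    for keyid in 0 1 2 3; do                        # the script iterates "${!keys[@]}" over its four keys
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # connect_authenticate sha512 "$dhgroup" "$keyid"  (attach, verify qpair auth, detach, as above)
    done
done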
00:20:43.920 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.920 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.179 01:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.438 00:20:44.438 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.438 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.438 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.697 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.697 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.697 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.697 01:04:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.697 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.697 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.697 { 00:20:44.697 "cntlid": 109, 00:20:44.697 "qid": 0, 00:20:44.697 "state": "enabled", 00:20:44.697 "thread": "nvmf_tgt_poll_group_000", 00:20:44.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:44.697 "listen_address": { 00:20:44.697 "trtype": "RDMA", 00:20:44.697 "adrfam": "IPv4", 00:20:44.697 "traddr": "192.168.100.8", 00:20:44.697 "trsvcid": "4420" 00:20:44.697 }, 00:20:44.697 "peer_address": { 00:20:44.697 "trtype": "RDMA", 00:20:44.697 "adrfam": "IPv4", 00:20:44.697 "traddr": "192.168.100.8", 00:20:44.697 "trsvcid": "59436" 00:20:44.697 }, 00:20:44.697 "auth": { 00:20:44.697 "state": "completed", 00:20:44.697 "digest": "sha512", 00:20:44.697 "dhgroup": "ffdhe2048" 00:20:44.697 } 00:20:44.697 } 00:20:44.697 ]' 00:20:44.697 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.697 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.697 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.697 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:44.697 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.956 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.956 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.956 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.956 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:44.956 01:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:45.524 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.783 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:45.783 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.783 01:04:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.783 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.783 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.783 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.783 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.042 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:46.042 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.043 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.302 00:20:46.302 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.302 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.302 01:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.560 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.560 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.561 { 00:20:46.561 "cntlid": 111, 00:20:46.561 "qid": 0, 00:20:46.561 "state": "enabled", 00:20:46.561 "thread": "nvmf_tgt_poll_group_000", 00:20:46.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:46.561 "listen_address": { 00:20:46.561 "trtype": "RDMA", 00:20:46.561 "adrfam": "IPv4", 00:20:46.561 "traddr": "192.168.100.8", 00:20:46.561 "trsvcid": "4420" 00:20:46.561 }, 00:20:46.561 "peer_address": { 00:20:46.561 "trtype": "RDMA", 00:20:46.561 "adrfam": "IPv4", 00:20:46.561 "traddr": "192.168.100.8", 00:20:46.561 "trsvcid": "54657" 00:20:46.561 }, 00:20:46.561 "auth": { 00:20:46.561 "state": "completed", 00:20:46.561 "digest": "sha512", 00:20:46.561 "dhgroup": "ffdhe2048" 00:20:46.561 } 00:20:46.561 } 00:20:46.561 ]' 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.561 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.820 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:46.820 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:47.387 01:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.387 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:47.387 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:20:47.387 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.387 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.387 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.387 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.387 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.387 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.647 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.905 00:20:47.905 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.905 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.905 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.163 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.164 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.164 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.164 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.164 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.164 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.164 { 00:20:48.164 "cntlid": 113, 00:20:48.164 "qid": 0, 00:20:48.164 "state": "enabled", 00:20:48.164 "thread": "nvmf_tgt_poll_group_000", 00:20:48.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:48.164 "listen_address": { 00:20:48.164 "trtype": "RDMA", 00:20:48.164 "adrfam": "IPv4", 00:20:48.164 "traddr": "192.168.100.8", 00:20:48.164 "trsvcid": "4420" 00:20:48.164 }, 00:20:48.164 "peer_address": { 00:20:48.164 "trtype": "RDMA", 00:20:48.164 "adrfam": "IPv4", 00:20:48.164 "traddr": "192.168.100.8", 00:20:48.164 "trsvcid": "54440" 00:20:48.164 }, 00:20:48.164 "auth": { 00:20:48.164 "state": "completed", 00:20:48.164 "digest": "sha512", 00:20:48.164 "dhgroup": "ffdhe3072" 00:20:48.164 } 00:20:48.164 } 00:20:48.164 ]' 00:20:48.164 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.164 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.164 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.423 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.423 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.423 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.423 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.423 01:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.423 01:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:48.423 01:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:49.358 01:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.358 01:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:49.358 01:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.358 01:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.358 01:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.358 01:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.358 01:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:49.358 01:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.358 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.926 00:20:49.926 01:04:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.926 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.926 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.926 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.926 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.927 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.927 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.927 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.927 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.927 { 00:20:49.927 "cntlid": 115, 00:20:49.927 "qid": 0, 00:20:49.927 "state": "enabled", 00:20:49.927 "thread": "nvmf_tgt_poll_group_000", 00:20:49.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:49.927 "listen_address": { 00:20:49.927 "trtype": "RDMA", 00:20:49.927 "adrfam": "IPv4", 00:20:49.927 "traddr": "192.168.100.8", 00:20:49.927 "trsvcid": "4420" 00:20:49.927 }, 00:20:49.927 "peer_address": { 00:20:49.927 "trtype": "RDMA", 00:20:49.927 "adrfam": "IPv4", 00:20:49.927 "traddr": "192.168.100.8", 00:20:49.927 "trsvcid": "56365" 00:20:49.927 }, 00:20:49.927 "auth": { 00:20:49.927 "state": "completed", 00:20:49.927 "digest": "sha512", 00:20:49.927 "dhgroup": "ffdhe3072" 00:20:49.927 } 00:20:49.927 } 00:20:49.927 ]' 00:20:49.927 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.927 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.927 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.185 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:50.185 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.185 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.185 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.185 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.443 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:50.443 01:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:51.010 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.010 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:51.010 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.010 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.010 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.010 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.010 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:51.010 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.268 01:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.527 00:20:51.527 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.527 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.527 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.786 { 00:20:51.786 "cntlid": 117, 00:20:51.786 "qid": 0, 00:20:51.786 "state": "enabled", 00:20:51.786 "thread": "nvmf_tgt_poll_group_000", 00:20:51.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:51.786 "listen_address": { 00:20:51.786 "trtype": "RDMA", 00:20:51.786 "adrfam": "IPv4", 00:20:51.786 "traddr": "192.168.100.8", 00:20:51.786 "trsvcid": "4420" 00:20:51.786 }, 00:20:51.786 "peer_address": { 00:20:51.786 "trtype": "RDMA", 00:20:51.786 "adrfam": "IPv4", 00:20:51.786 "traddr": "192.168.100.8", 00:20:51.786 "trsvcid": "47456" 00:20:51.786 }, 00:20:51.786 "auth": { 00:20:51.786 "state": "completed", 00:20:51.786 "digest": "sha512", 00:20:51.786 "dhgroup": "ffdhe3072" 00:20:51.786 } 00:20:51.786 } 00:20:51.786 ]' 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.786 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.045 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret 
DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:52.045 01:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:52.615 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.874 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:52.874 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.874 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.874 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.874 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.874 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.874 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.132 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:53.132 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.132 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:53.132 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:53.132 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.133 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.133 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:53.133 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.133 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.133 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.133 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.133 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.133 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.391 00:20:53.391 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.391 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.391 01:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.650 { 00:20:53.650 "cntlid": 119, 00:20:53.650 "qid": 0, 00:20:53.650 "state": "enabled", 00:20:53.650 "thread": "nvmf_tgt_poll_group_000", 00:20:53.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:53.650 "listen_address": { 00:20:53.650 "trtype": "RDMA", 00:20:53.650 "adrfam": "IPv4", 00:20:53.650 "traddr": "192.168.100.8", 00:20:53.650 "trsvcid": "4420" 00:20:53.650 }, 00:20:53.650 "peer_address": { 00:20:53.650 "trtype": "RDMA", 00:20:53.650 "adrfam": "IPv4", 00:20:53.650 "traddr": "192.168.100.8", 00:20:53.650 "trsvcid": "35682" 00:20:53.650 }, 00:20:53.650 "auth": { 00:20:53.650 "state": "completed", 00:20:53.650 "digest": "sha512", 00:20:53.650 "dhgroup": "ffdhe3072" 00:20:53.650 } 00:20:53.650 } 00:20:53.650 ]' 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.650 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.910 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:53.910 01:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:20:54.492 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.492 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:54.492 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.492 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.492 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.492 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.492 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.492 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:54.492 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.766 01:05:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.766 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.043 00:20:55.043 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.043 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.043 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.314 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.314 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.314 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.314 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.314 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.314 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.314 { 00:20:55.314 "cntlid": 121, 00:20:55.314 "qid": 0, 00:20:55.314 "state": "enabled", 00:20:55.314 "thread": "nvmf_tgt_poll_group_000", 00:20:55.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:55.314 "listen_address": { 00:20:55.314 "trtype": "RDMA", 00:20:55.314 "adrfam": "IPv4", 00:20:55.314 "traddr": "192.168.100.8", 00:20:55.314 "trsvcid": "4420" 00:20:55.314 }, 00:20:55.314 "peer_address": { 00:20:55.314 "trtype": "RDMA", 00:20:55.314 "adrfam": "IPv4", 00:20:55.314 "traddr": "192.168.100.8", 00:20:55.314 "trsvcid": "56253" 00:20:55.314 }, 00:20:55.314 "auth": { 00:20:55.314 "state": "completed", 00:20:55.314 "digest": "sha512", 00:20:55.314 "dhgroup": "ffdhe4096" 00:20:55.314 } 00:20:55.314 } 00:20:55.314 ]' 00:20:55.314 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.315 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.315 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.315 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.315 01:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.595 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.595 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.595 
01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.595 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:55.595 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:20:56.227 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.504 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:56.504 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.504 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.504 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.504 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.504 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:56.504 01:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.504 01:05:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.504 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.779 00:20:56.779 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.779 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.779 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.064 { 00:20:57.064 "cntlid": 123, 00:20:57.064 "qid": 0, 00:20:57.064 "state": "enabled", 00:20:57.064 "thread": "nvmf_tgt_poll_group_000", 00:20:57.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:57.064 "listen_address": { 00:20:57.064 "trtype": "RDMA", 00:20:57.064 "adrfam": "IPv4", 00:20:57.064 "traddr": "192.168.100.8", 00:20:57.064 "trsvcid": "4420" 00:20:57.064 }, 00:20:57.064 "peer_address": { 00:20:57.064 "trtype": "RDMA", 00:20:57.064 "adrfam": "IPv4", 00:20:57.064 "traddr": "192.168.100.8", 00:20:57.064 "trsvcid": "42793" 00:20:57.064 }, 00:20:57.064 "auth": { 00:20:57.064 "state": "completed", 00:20:57.064 "digest": "sha512", 00:20:57.064 "dhgroup": "ffdhe4096" 00:20:57.064 } 00:20:57.064 } 00:20:57.064 ]' 00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 
00:20:57.064 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.344 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.344 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.344 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.344 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:57.344 01:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:20:57.946 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.213 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:58.213 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.213 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.213 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.213 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.213 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.213 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.485 01:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.767 00:20:58.767 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.767 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.767 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.767 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.767 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.767 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.767 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.767 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.767 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.767 { 00:20:58.767 "cntlid": 125, 00:20:58.767 "qid": 0, 00:20:58.767 "state": "enabled", 00:20:58.768 "thread": "nvmf_tgt_poll_group_000", 00:20:58.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:58.768 "listen_address": { 00:20:58.768 "trtype": "RDMA", 00:20:58.768 "adrfam": "IPv4", 00:20:58.768 "traddr": "192.168.100.8", 00:20:58.768 "trsvcid": "4420" 00:20:58.768 }, 00:20:58.768 "peer_address": { 00:20:58.768 "trtype": "RDMA", 00:20:58.768 "adrfam": "IPv4", 00:20:58.768 "traddr": "192.168.100.8", 00:20:58.768 "trsvcid": "58012" 00:20:58.768 }, 00:20:58.768 "auth": { 00:20:58.768 "state": "completed", 00:20:58.768 "digest": "sha512", 00:20:58.768 "dhgroup": "ffdhe4096" 00:20:58.768 } 00:20:58.768 } 00:20:58.768 ]' 00:20:58.768 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.050 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:20:59.050 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.050 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.050 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.050 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.050 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.050 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.364 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:59.364 01:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:20:59.930 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.930 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:59.930 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.930 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.930 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.930 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.930 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.930 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.188 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:00.188 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.188 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.189 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:00.189 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 
00:21:00.189 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.189 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:00.189 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.189 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.189 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.189 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.189 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.189 01:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.446 00:21:00.446 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.446 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.446 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.705 { 00:21:00.705 "cntlid": 127, 00:21:00.705 "qid": 0, 00:21:00.705 "state": "enabled", 00:21:00.705 "thread": "nvmf_tgt_poll_group_000", 00:21:00.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:00.705 "listen_address": { 00:21:00.705 "trtype": "RDMA", 00:21:00.705 "adrfam": "IPv4", 00:21:00.705 "traddr": "192.168.100.8", 00:21:00.705 "trsvcid": "4420" 00:21:00.705 }, 00:21:00.705 "peer_address": { 00:21:00.705 "trtype": "RDMA", 00:21:00.705 "adrfam": "IPv4", 00:21:00.705 "traddr": "192.168.100.8", 00:21:00.705 "trsvcid": "41824" 00:21:00.705 }, 00:21:00.705 "auth": { 00:21:00.705 "state": "completed", 00:21:00.705 "digest": "sha512", 00:21:00.705 "dhgroup": "ffdhe4096" 00:21:00.705 } 00:21:00.705 } 00:21:00.705 ]' 00:21:00.705 01:05:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.705 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.963 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:21:00.963 01:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:21:01.529 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.787 01:05:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.787 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.045 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.303 00:21:02.303 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.303 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.303 01:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.561 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.561 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.561 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.561 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.561 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.561 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.561 { 00:21:02.561 "cntlid": 129, 00:21:02.561 "qid": 0, 00:21:02.561 "state": "enabled", 00:21:02.561 "thread": "nvmf_tgt_poll_group_000", 00:21:02.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:02.561 "listen_address": { 00:21:02.561 "trtype": "RDMA", 00:21:02.561 "adrfam": "IPv4", 00:21:02.561 "traddr": "192.168.100.8", 00:21:02.561 "trsvcid": "4420" 00:21:02.561 }, 00:21:02.562 "peer_address": { 00:21:02.562 "trtype": "RDMA", 00:21:02.562 "adrfam": "IPv4", 00:21:02.562 "traddr": 
"192.168.100.8", 00:21:02.562 "trsvcid": "60347" 00:21:02.562 }, 00:21:02.562 "auth": { 00:21:02.562 "state": "completed", 00:21:02.562 "digest": "sha512", 00:21:02.562 "dhgroup": "ffdhe6144" 00:21:02.562 } 00:21:02.562 } 00:21:02.562 ]' 00:21:02.562 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.562 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.562 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.562 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.562 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.562 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.562 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.562 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.819 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:21:02.819 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:21:03.384 01:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.642 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:03.642 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.642 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.642 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.642 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.642 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.642 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.901 01:05:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.901 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.159 00:21:04.159 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.159 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.159 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.417 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.417 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.417 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.417 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.417 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.417 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.417 { 00:21:04.417 "cntlid": 131, 00:21:04.417 "qid": 0, 00:21:04.417 "state": "enabled", 00:21:04.417 "thread": "nvmf_tgt_poll_group_000", 00:21:04.417 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:04.417 "listen_address": { 00:21:04.417 "trtype": "RDMA", 00:21:04.417 "adrfam": "IPv4", 00:21:04.417 "traddr": "192.168.100.8", 00:21:04.417 "trsvcid": "4420" 00:21:04.417 }, 00:21:04.417 "peer_address": { 00:21:04.417 "trtype": "RDMA", 00:21:04.417 "adrfam": "IPv4", 00:21:04.417 "traddr": "192.168.100.8", 00:21:04.417 "trsvcid": "45565" 00:21:04.417 }, 00:21:04.417 "auth": { 00:21:04.417 "state": "completed", 00:21:04.417 "digest": "sha512", 00:21:04.417 "dhgroup": "ffdhe6144" 00:21:04.417 } 00:21:04.417 } 00:21:04.417 ]' 00:21:04.417 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.417 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.417 01:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.417 01:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:04.417 01:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.417 01:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.417 01:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.417 01:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.676 01:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:21:04.676 01:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:21:05.241 01:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.500 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:05.500 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.500 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.500 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.500 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.500 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
00:21:05.500 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.758 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.017 00:21:06.017 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.017 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.017 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.275 { 00:21:06.275 "cntlid": 133, 00:21:06.275 "qid": 0, 00:21:06.275 "state": "enabled", 00:21:06.275 "thread": "nvmf_tgt_poll_group_000", 00:21:06.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:06.275 "listen_address": { 00:21:06.275 "trtype": "RDMA", 00:21:06.275 "adrfam": "IPv4", 00:21:06.275 "traddr": "192.168.100.8", 00:21:06.275 "trsvcid": "4420" 00:21:06.275 }, 00:21:06.275 "peer_address": { 00:21:06.275 "trtype": "RDMA", 00:21:06.275 "adrfam": "IPv4", 00:21:06.275 "traddr": "192.168.100.8", 00:21:06.275 "trsvcid": "50644" 00:21:06.275 }, 00:21:06.275 "auth": { 00:21:06.275 "state": "completed", 00:21:06.275 "digest": "sha512", 00:21:06.275 "dhgroup": "ffdhe6144" 00:21:06.275 } 00:21:06.275 } 00:21:06.275 ]' 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.275 01:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.533 01:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:21:06.533 01:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:21:07.099 01:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.357 01:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:07.357 01:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.357 01:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.357 01:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.357 01:05:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.357 01:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:07.357 01:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.615 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.874 00:21:07.874 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.874 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.874 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.132 01:05:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.132 { 00:21:08.132 "cntlid": 135, 00:21:08.132 "qid": 0, 00:21:08.132 "state": "enabled", 00:21:08.132 "thread": "nvmf_tgt_poll_group_000", 00:21:08.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:08.132 "listen_address": { 00:21:08.132 "trtype": "RDMA", 00:21:08.132 "adrfam": "IPv4", 00:21:08.132 "traddr": "192.168.100.8", 00:21:08.132 "trsvcid": "4420" 00:21:08.132 }, 00:21:08.132 "peer_address": { 00:21:08.132 "trtype": "RDMA", 00:21:08.132 "adrfam": "IPv4", 00:21:08.132 "traddr": "192.168.100.8", 00:21:08.132 "trsvcid": "33020" 00:21:08.132 }, 00:21:08.132 "auth": { 00:21:08.132 "state": "completed", 00:21:08.132 "digest": "sha512", 00:21:08.132 "dhgroup": "ffdhe6144" 00:21:08.132 } 00:21:08.132 } 00:21:08.132 ]' 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.132 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.390 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:21:08.390 01:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:21:08.956 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.214 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.473 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.473 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.473 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.473 01:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.731 00:21:09.731 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.731 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.731 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.989 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.989 01:05:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.989 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.989 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.989 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.989 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.989 { 00:21:09.989 "cntlid": 137, 00:21:09.989 "qid": 0, 00:21:09.989 "state": "enabled", 00:21:09.989 "thread": "nvmf_tgt_poll_group_000", 00:21:09.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:09.989 "listen_address": { 00:21:09.989 "trtype": "RDMA", 00:21:09.989 "adrfam": "IPv4", 00:21:09.989 "traddr": "192.168.100.8", 00:21:09.989 "trsvcid": "4420" 00:21:09.989 }, 00:21:09.989 "peer_address": { 00:21:09.989 "trtype": "RDMA", 00:21:09.989 "adrfam": "IPv4", 00:21:09.989 "traddr": "192.168.100.8", 00:21:09.989 "trsvcid": "49974" 00:21:09.989 }, 00:21:09.989 "auth": { 00:21:09.989 "state": "completed", 00:21:09.989 "digest": "sha512", 00:21:09.989 "dhgroup": "ffdhe8192" 00:21:09.989 } 00:21:09.989 } 00:21:09.989 ]' 00:21:09.989 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.989 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.989 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.989 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.989 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.247 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.247 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.247 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.505 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:21:10.505 01:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:21:11.073 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.073 01:05:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:11.073 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.073 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.073 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.073 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.074 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.074 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.332 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.333 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.333 01:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.900 00:21:11.900 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.900 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # jq -r '.[].name' 00:21:11.900 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.158 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.158 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.158 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.159 { 00:21:12.159 "cntlid": 139, 00:21:12.159 "qid": 0, 00:21:12.159 "state": "enabled", 00:21:12.159 "thread": "nvmf_tgt_poll_group_000", 00:21:12.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:12.159 "listen_address": { 00:21:12.159 "trtype": "RDMA", 00:21:12.159 "adrfam": "IPv4", 00:21:12.159 "traddr": "192.168.100.8", 00:21:12.159 "trsvcid": "4420" 00:21:12.159 }, 00:21:12.159 "peer_address": { 00:21:12.159 "trtype": "RDMA", 00:21:12.159 "adrfam": "IPv4", 00:21:12.159 "traddr": "192.168.100.8", 00:21:12.159 "trsvcid": "47699" 00:21:12.159 }, 00:21:12.159 "auth": { 00:21:12.159 "state": "completed", 00:21:12.159 "digest": "sha512", 00:21:12.159 "dhgroup": "ffdhe8192" 00:21:12.159 } 00:21:12.159 } 00:21:12.159 ]' 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.159 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.417 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:21:12.417 01:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: --dhchap-ctrl-secret 
DHHC-1:02:OWRjZTdjODE0NDNjNmVjYWYzYmUzZDM1OGE3NDk3Yjc2YmY2OTcyY2Y3NTUwNWE540+yNA==: 00:21:12.987 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.987 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:12.987 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.987 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.245 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.245 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.245 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.245 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.245 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:13.245 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.245 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.246 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:13.246 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:13.246 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.246 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.246 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.246 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.246 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.246 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.246 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.246 01:05:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.813 00:21:13.813 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.813 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.813 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.071 { 00:21:14.071 "cntlid": 141, 00:21:14.071 "qid": 0, 00:21:14.071 "state": "enabled", 00:21:14.071 "thread": "nvmf_tgt_poll_group_000", 00:21:14.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:14.071 "listen_address": { 00:21:14.071 "trtype": "RDMA", 00:21:14.071 "adrfam": "IPv4", 00:21:14.071 "traddr": "192.168.100.8", 00:21:14.071 "trsvcid": "4420" 00:21:14.071 }, 00:21:14.071 "peer_address": { 00:21:14.071 "trtype": "RDMA", 00:21:14.071 "adrfam": "IPv4", 00:21:14.071 "traddr": "192.168.100.8", 00:21:14.071 "trsvcid": "38371" 00:21:14.071 }, 00:21:14.071 "auth": { 00:21:14.071 "state": "completed", 00:21:14.071 "digest": "sha512", 00:21:14.071 "dhgroup": "ffdhe8192" 00:21:14.071 } 00:21:14.071 } 00:21:14.071 ]' 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.071 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.330 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:21:14.330 01:05:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:01:NWRhOTFhOTAwNmE1ZjViOGEwYWI2OTRhNzM1ZGU1N2JCxwyV: 00:21:14.897 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.156 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:15.156 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.156 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.156 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.156 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.156 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.156 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.415 01:05:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.982 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.982 { 00:21:15.982 "cntlid": 143, 00:21:15.982 "qid": 0, 00:21:15.982 "state": "enabled", 00:21:15.982 "thread": "nvmf_tgt_poll_group_000", 00:21:15.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:15.982 "listen_address": { 00:21:15.982 "trtype": "RDMA", 00:21:15.982 "adrfam": "IPv4", 00:21:15.982 "traddr": "192.168.100.8", 00:21:15.982 "trsvcid": "4420" 00:21:15.982 }, 00:21:15.982 "peer_address": { 00:21:15.982 "trtype": "RDMA", 00:21:15.982 "adrfam": "IPv4", 00:21:15.982 "traddr": "192.168.100.8", 00:21:15.982 "trsvcid": "54812" 00:21:15.982 }, 00:21:15.982 "auth": { 00:21:15.982 "state": "completed", 00:21:15.982 "digest": "sha512", 00:21:15.982 "dhgroup": "ffdhe8192" 00:21:15.982 } 00:21:15.982 } 00:21:15.982 ]' 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.982 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.240 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.240 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.240 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.498 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:21:16.498 01:05:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.065 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.323 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
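The auth.sh@129/@141 pass traced above differs from the earlier ones only in how the host options are built: instead of pinning a single digest and DH group, every supported value is joined with commas (the IFS=, / printf %s idiom in the trace) and handed to bdev_nvme_set_options in one call, so the host may negotiate any of them. A minimal sketch of that construction, assuming the same rpc.py path and host socket as above (the digests/dhgroups arrays mirror the lists printed in the trace):

  digests=(sha256 sha384 sha512)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

  # Join each array with commas and pass the lists to the host-side initiator options.
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options \
      --dhchap-digests  "$(IFS=,; printf %s "${digests[*]}")" \
      --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"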
00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.324 01:05:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.890 00:21:17.890 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.890 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.890 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.890 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.890 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.890 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.890 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.890 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.148 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.148 { 00:21:18.148 "cntlid": 145, 00:21:18.148 "qid": 0, 00:21:18.148 "state": "enabled", 00:21:18.148 "thread": "nvmf_tgt_poll_group_000", 00:21:18.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:18.148 "listen_address": { 00:21:18.148 "trtype": "RDMA", 00:21:18.148 "adrfam": "IPv4", 00:21:18.148 "traddr": "192.168.100.8", 00:21:18.148 "trsvcid": "4420" 00:21:18.148 }, 00:21:18.148 "peer_address": { 00:21:18.148 "trtype": "RDMA", 00:21:18.148 "adrfam": "IPv4", 00:21:18.148 "traddr": "192.168.100.8", 00:21:18.148 "trsvcid": "59238" 00:21:18.148 }, 00:21:18.148 "auth": { 00:21:18.148 "state": "completed", 00:21:18.148 "digest": "sha512", 00:21:18.148 "dhgroup": "ffdhe8192" 00:21:18.148 } 00:21:18.148 } 00:21:18.148 ]' 00:21:18.148 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.148 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.148 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.148 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.148 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:18.148 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.148 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.148 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.407 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:21:18.407 01:05:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTQ1ZTM0M2Q3NDEwMTk3MDhmODczZjgzYzAwZGE5Zjg4YzczN2U3MzU1ODBiNTdjr5ob9w==: --dhchap-ctrl-secret DHHC-1:03:ZjBiMDNkNmYzZWMzYzA1YTdmOGU5MTk1OGIwNzEwMTczOTNkZWQ3OWRiN2ViZjM0MDUyZDQ3OTNiYWUzYTlkYvj+sLE=: 00:21:18.973 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:19.232 01:05:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:19.232 01:05:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:19.491 request: 00:21:19.491 { 00:21:19.491 "name": "nvme0", 00:21:19.491 "trtype": "rdma", 00:21:19.491 "traddr": "192.168.100.8", 00:21:19.491 "adrfam": "ipv4", 00:21:19.491 "trsvcid": "4420", 00:21:19.491 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:19.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:19.491 "prchk_reftag": false, 00:21:19.491 "prchk_guard": false, 00:21:19.491 "hdgst": false, 00:21:19.491 "ddgst": false, 00:21:19.491 "dhchap_key": "key2", 00:21:19.491 "allow_unrecognized_csi": false, 00:21:19.491 "method": "bdev_nvme_attach_controller", 00:21:19.491 "req_id": 1 00:21:19.491 } 00:21:19.491 Got JSON-RPC error response 00:21:19.491 response: 00:21:19.491 { 00:21:19.491 "code": -5, 00:21:19.491 "message": "Input/output error" 00:21:19.491 } 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey2 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:19.749 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.316 request: 00:21:20.316 { 00:21:20.316 "name": "nvme0", 00:21:20.316 "trtype": "rdma", 00:21:20.316 "traddr": "192.168.100.8", 00:21:20.316 "adrfam": "ipv4", 00:21:20.316 "trsvcid": "4420", 00:21:20.316 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:20.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:20.316 "prchk_reftag": false, 00:21:20.316 "prchk_guard": false, 00:21:20.316 "hdgst": false, 00:21:20.316 "ddgst": false, 00:21:20.316 "dhchap_key": "key1", 00:21:20.316 "dhchap_ctrlr_key": "ckey2", 00:21:20.316 "allow_unrecognized_csi": false, 00:21:20.316 "method": "bdev_nvme_attach_controller", 00:21:20.316 "req_id": 1 00:21:20.316 } 00:21:20.316 Got JSON-RPC error response 00:21:20.316 response: 00:21:20.316 { 00:21:20.316 "code": -5, 00:21:20.316 "message": "Input/output error" 00:21:20.316 } 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.316 01:05:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.316 01:05:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.575 request: 00:21:20.575 { 00:21:20.575 "name": "nvme0", 00:21:20.575 "trtype": "rdma", 00:21:20.575 "traddr": "192.168.100.8", 00:21:20.575 "adrfam": "ipv4", 00:21:20.575 "trsvcid": "4420", 00:21:20.575 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:20.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:20.575 "prchk_reftag": false, 00:21:20.575 "prchk_guard": false, 00:21:20.575 "hdgst": false, 00:21:20.575 "ddgst": false, 00:21:20.575 "dhchap_key": "key1", 00:21:20.575 "dhchap_ctrlr_key": "ckey1", 00:21:20.575 "allow_unrecognized_csi": false, 00:21:20.575 "method": "bdev_nvme_attach_controller", 00:21:20.575 "req_id": 1 00:21:20.575 } 00:21:20.575 Got JSON-RPC error response 00:21:20.575 response: 00:21:20.575 { 00:21:20.575 "code": -5, 00:21:20.575 "message": "Input/output error" 00:21:20.575 } 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:20.833 
01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 355977 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 355977 ']' 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 355977 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 355977 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 355977' 00:21:20.833 killing process with pid 355977 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 355977 00:21:20.833 01:05:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 355977 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=380511 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 380511 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 380511 ']' 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.209 01:05:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.209 01:05:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 380511 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 380511 ']' 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.776 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:22.777 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.777 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.035 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.035 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:23.035 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:23.035 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.035 01:05:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.293 null0 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PGy 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.t87 ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t87 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BZB 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.wKU ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wKU 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Rdj 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.tNA ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tNA 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Yli 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.551 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.488 nvme0n1 00:21:24.488 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.488 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.488 01:05:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.488 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.488 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.488 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.488 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.488 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.488 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.488 { 00:21:24.488 "cntlid": 1, 00:21:24.488 "qid": 0, 00:21:24.488 "state": "enabled", 00:21:24.488 "thread": "nvmf_tgt_poll_group_000", 00:21:24.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:24.488 "listen_address": { 00:21:24.488 "trtype": "RDMA", 00:21:24.488 "adrfam": "IPv4", 00:21:24.488 "traddr": "192.168.100.8", 00:21:24.488 "trsvcid": "4420" 00:21:24.488 }, 00:21:24.488 "peer_address": { 00:21:24.488 "trtype": "RDMA", 00:21:24.488 "adrfam": "IPv4", 00:21:24.488 "traddr": "192.168.100.8", 00:21:24.488 "trsvcid": "54132" 00:21:24.488 }, 00:21:24.488 "auth": { 00:21:24.488 "state": "completed", 00:21:24.488 "digest": "sha512", 00:21:24.488 "dhgroup": "ffdhe8192" 00:21:24.488 } 00:21:24.488 } 00:21:24.488 ]' 00:21:24.488 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.488 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.488 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.746 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.746 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.746 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.746 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.746 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.004 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:21:25.004 01:05:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:25.570 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:25.828 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:25.828 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:25.828 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:25.828 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:25.828 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:25.828 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:25.828 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:25.828 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.828 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.828 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.087 request: 00:21:26.087 { 00:21:26.087 "name": "nvme0", 00:21:26.087 "trtype": "rdma", 00:21:26.087 "traddr": "192.168.100.8", 00:21:26.087 "adrfam": "ipv4", 00:21:26.087 "trsvcid": "4420", 00:21:26.087 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:26.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:26.087 "prchk_reftag": false, 00:21:26.087 "prchk_guard": false, 00:21:26.087 "hdgst": false, 00:21:26.087 "ddgst": false, 00:21:26.087 "dhchap_key": "key3", 00:21:26.087 "allow_unrecognized_csi": false, 00:21:26.087 "method": "bdev_nvme_attach_controller", 00:21:26.087 "req_id": 1 00:21:26.087 } 00:21:26.087 Got JSON-RPC error response 00:21:26.087 response: 00:21:26.087 { 00:21:26.087 "code": -5, 00:21:26.087 "message": "Input/output error" 00:21:26.087 } 00:21:26.087 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:26.087 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:26.087 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:26.087 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:26.087 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:26.087 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:26.087 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:26.087 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:26.347 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:26.347 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:26.347 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:26.347 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:26.347 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:26.347 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:26.347 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:26.347 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b 
nvme0 --dhchap-key key3 00:21:26.347 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.348 01:05:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.608 request: 00:21:26.608 { 00:21:26.608 "name": "nvme0", 00:21:26.608 "trtype": "rdma", 00:21:26.608 "traddr": "192.168.100.8", 00:21:26.608 "adrfam": "ipv4", 00:21:26.608 "trsvcid": "4420", 00:21:26.608 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:26.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:26.608 "prchk_reftag": false, 00:21:26.608 "prchk_guard": false, 00:21:26.608 "hdgst": false, 00:21:26.608 "ddgst": false, 00:21:26.608 "dhchap_key": "key3", 00:21:26.608 "allow_unrecognized_csi": false, 00:21:26.608 "method": "bdev_nvme_attach_controller", 00:21:26.608 "req_id": 1 00:21:26.608 } 00:21:26.608 Got JSON-RPC error response 00:21:26.608 response: 00:21:26.608 { 00:21:26.608 "code": -5, 00:21:26.608 "message": "Input/output error" 00:21:26.608 } 00:21:26.608 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:26.608 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:26.608 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:26.608 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:26.608 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:26.608 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:26.608 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:26.608 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:26.608 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:26.608 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:26.868 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:27.126 request: 00:21:27.126 { 00:21:27.126 "name": "nvme0", 00:21:27.126 "trtype": "rdma", 00:21:27.126 "traddr": "192.168.100.8", 00:21:27.126 "adrfam": "ipv4", 00:21:27.126 "trsvcid": "4420", 00:21:27.127 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:27.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:27.127 "prchk_reftag": false, 00:21:27.127 "prchk_guard": false, 00:21:27.127 "hdgst": false, 00:21:27.127 "ddgst": false, 00:21:27.127 "dhchap_key": "key0", 00:21:27.127 "dhchap_ctrlr_key": "key1", 00:21:27.127 "allow_unrecognized_csi": false, 00:21:27.127 "method": "bdev_nvme_attach_controller", 00:21:27.127 "req_id": 1 00:21:27.127 } 00:21:27.127 Got JSON-RPC error response 00:21:27.127 response: 00:21:27.127 { 00:21:27.127 "code": -5, 00:21:27.127 "message": "Input/output error" 00:21:27.127 } 00:21:27.127 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:27.127 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.127 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.127 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.127 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:27.127 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:27.127 01:05:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:27.385 nvme0n1 00:21:27.385 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:27.385 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:27.385 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.644 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.644 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.644 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.903 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:21:27.903 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.903 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.903 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.903 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:27.903 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:27.903 01:05:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:28.839 nvme0n1 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # 
hostrpc bdev_nvme_get_controllers 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:28.839 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.098 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.098 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:21:29.098 01:05:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: --dhchap-ctrl-secret DHHC-1:03:NWY4M2MxNzE0MjJhNzcwM2YyYTg2ZThmODJjMTMwYmVlNTdhYjNhMjc5MGFkZTI0Y2YwOTY1NGE1YWE3NzJkNxoPYhA=: 00:21:29.664 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:29.664 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:29.664 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:29.664 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:29.664 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:29.664 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:29.664 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:29.664 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.664 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.922 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:29.922 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:29.922 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:29.922 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:29.922 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.923 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:29.923 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.923 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:29.923 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:29.923 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:30.489 request: 00:21:30.489 { 00:21:30.489 "name": "nvme0", 00:21:30.489 "trtype": "rdma", 00:21:30.489 "traddr": "192.168.100.8", 00:21:30.489 "adrfam": "ipv4", 00:21:30.489 "trsvcid": "4420", 00:21:30.489 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:30.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:30.489 "prchk_reftag": false, 00:21:30.489 "prchk_guard": false, 00:21:30.489 "hdgst": false, 00:21:30.489 "ddgst": false, 00:21:30.489 "dhchap_key": "key1", 00:21:30.489 "allow_unrecognized_csi": false, 00:21:30.489 "method": "bdev_nvme_attach_controller", 00:21:30.489 "req_id": 1 00:21:30.489 } 00:21:30.489 Got JSON-RPC error response 00:21:30.489 response: 00:21:30.489 { 00:21:30.489 "code": -5, 00:21:30.489 "message": "Input/output error" 00:21:30.489 } 00:21:30.489 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:30.489 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:30.489 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:30.489 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:30.489 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:30.489 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:30.489 01:05:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:31.056 nvme0n1 00:21:31.056 01:05:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:31.056 01:05:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:31.056 01:05:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.314 01:05:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.314 01:05:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.314 01:05:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.573 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:31.573 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.574 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.574 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.574 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:31.574 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:31.574 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:31.833 nvme0n1 00:21:31.833 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:31.833 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:31.833 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.091 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.091 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.091 
01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.349 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: '' 2s 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: ]] 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NmI0ODI3MWRiMzA1YmYzMWE5NDZjYmY3NzM1ZTZkMmaW4MbZ: 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:32.350 01:05:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: 2s 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: 00:21:34.250 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:34.251 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:34.251 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:34.251 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: ]] 00:21:34.251 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDhiYTAyOWU2M2UyZWU5Njg3NzgwMTE5NmM0MjZiOTVkOTY5NTBjYWUzYWMxODRi3oKgBw==: 00:21:34.251 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:34.251 01:05:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:36.781 01:05:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:36.781 01:05:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:36.781 01:05:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:36.781 01:05:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:36.781 01:05:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:36.781 01:05:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:36.781 01:05:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:36.781 01:05:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.781 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:36.781 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.781 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.781 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.781 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:36.781 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:36.781 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:37.349 nvme0n1 00:21:37.349 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:37.349 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.349 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.349 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.349 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:37.349 01:05:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:37.917 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:37.917 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:37.917 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.917 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.917 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:37.917 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.917 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.917 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.917 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:37.917 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:38.176 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:38.176 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.176 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:38.434 01:05:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:39.002 request: 00:21:39.002 { 00:21:39.002 "name": "nvme0", 00:21:39.002 "dhchap_key": "key1", 00:21:39.002 "dhchap_ctrlr_key": "key3", 00:21:39.002 "method": "bdev_nvme_set_keys", 00:21:39.002 "req_id": 1 00:21:39.002 } 00:21:39.002 Got JSON-RPC error response 00:21:39.002 response: 00:21:39.002 { 00:21:39.002 "code": -13, 00:21:39.002 "message": "Permission denied" 00:21:39.002 } 00:21:39.002 01:05:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:39.002 01:05:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.002 01:05:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.002 01:05:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.002 01:05:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:39.002 01:05:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:39.002 01:05:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.002 01:05:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:39.002 01:05:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:40.378 01:05:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:40.945 nvme0n1 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT 
hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:40.945 01:05:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:41.512 request: 00:21:41.512 { 00:21:41.512 "name": "nvme0", 00:21:41.512 "dhchap_key": "key2", 00:21:41.512 "dhchap_ctrlr_key": "key0", 00:21:41.512 "method": "bdev_nvme_set_keys", 00:21:41.512 "req_id": 1 00:21:41.512 } 00:21:41.512 Got JSON-RPC error response 00:21:41.512 response: 00:21:41.512 { 00:21:41.512 "code": -13, 00:21:41.512 "message": "Permission denied" 00:21:41.512 } 00:21:41.512 01:05:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:41.512 01:05:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.512 01:05:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.512 01:05:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.512 01:05:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:41.512 01:05:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:41.512 01:05:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.771 01:05:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:41.771 01:05:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:42.704 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:42.704 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:42.704 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.960 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:42.961 01:05:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 356217 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 356217 ']' 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 356217 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 356217 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 356217' 00:21:42.961 killing process with pid 356217 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 356217 00:21:42.961 01:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 356217 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:45.489 rmmod nvme_rdma 00:21:45.489 rmmod nvme_fabrics 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 380511 ']' 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 380511 00:21:45.489 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 380511 ']' 00:21:45.490 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 380511 00:21:45.490 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:45.490 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.490 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 
-- # ps --no-headers -o comm= 380511 00:21:45.490 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.490 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.490 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380511' 00:21:45.490 killing process with pid 380511 00:21:45.490 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 380511 00:21:45.490 01:05:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 380511 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.PGy /tmp/spdk.key-sha256.BZB /tmp/spdk.key-sha384.Rdj /tmp/spdk.key-sha512.Yli /tmp/spdk.key-sha512.t87 /tmp/spdk.key-sha384.wKU /tmp/spdk.key-sha256.tNA '' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf-auth.log 00:21:46.426 00:21:46.426 real 2m52.496s 00:21:46.426 user 6m35.711s 00:21:46.426 sys 0m22.212s 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.426 ************************************ 00:21:46.426 END TEST nvmf_auth_target 00:21:46.426 ************************************ 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.426 01:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.685 ************************************ 00:21:46.685 START TEST nvmf_fuzz 00:21:46.685 ************************************ 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:21:46.685 * Looking for test storage... 
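The nvmf_auth_target run that ends above exercises DH-HMAC-CHAP key rotation: the target's key slots for the host are updated with nvmf_subsystem_set_keys, the host side is rotated in place with bdev_nvme_set_keys (or reconnected via bdev_nvme_attach_controller), and a deliberately mismatched rotation is expected to be rejected with JSON-RPC error -13 (Permission denied). A condensed sketch of that flow, reusing the RPC socket, NQNs and key slots shown in the log above (the rpc.py path and socket locations are specific to this workspace):

  # Target side: publish new key slots for this host on the subsystem
  rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # Host side: rotate the keys on the existing controller without reconnecting
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

  # The controller should still be listed after a successful rotation
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

  # A rotation that does not match the target's slots must fail (-13, Permission denied)
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 \
      || echo "rejected as expected"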
00:21:46.685 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:21:46.685 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:46.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.686 --rc genhtml_branch_coverage=1 00:21:46.686 --rc genhtml_function_coverage=1 00:21:46.686 --rc genhtml_legend=1 00:21:46.686 --rc geninfo_all_blocks=1 00:21:46.686 --rc geninfo_unexecuted_blocks=1 00:21:46.686 00:21:46.686 ' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:46.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.686 --rc genhtml_branch_coverage=1 00:21:46.686 --rc genhtml_function_coverage=1 00:21:46.686 --rc genhtml_legend=1 00:21:46.686 --rc geninfo_all_blocks=1 00:21:46.686 --rc geninfo_unexecuted_blocks=1 00:21:46.686 00:21:46.686 ' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:46.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.686 --rc genhtml_branch_coverage=1 00:21:46.686 --rc genhtml_function_coverage=1 00:21:46.686 --rc genhtml_legend=1 00:21:46.686 --rc geninfo_all_blocks=1 00:21:46.686 --rc geninfo_unexecuted_blocks=1 00:21:46.686 00:21:46.686 ' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:46.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.686 --rc genhtml_branch_coverage=1 00:21:46.686 --rc genhtml_function_coverage=1 00:21:46.686 --rc genhtml_legend=1 00:21:46.686 --rc geninfo_all_blocks=1 00:21:46.686 --rc geninfo_unexecuted_blocks=1 00:21:46.686 00:21:46.686 ' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.686 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.686 01:05:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.257 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:53.258 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:53.258 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:53.258 01:05:58 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@405 -- # modinfo irdma 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:53.258 Found net devices under 0000:af:00.0: cvl_0_0 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:53.258 Found net devices under 0000:af:00.1: cvl_0_1 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo cvl_0_0 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo cvl_0_1 00:21:53.258 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:53.259 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:53.259 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:21:53.259 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:21:53.259 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:21:53.259 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:53.259 01:05:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@117 -- # cut -d/ -f1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:21:53.259 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:53.259 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:21:53.259 altname enp175s0f0np0 00:21:53.259 altname ens801f0np0 00:21:53.259 inet 192.168.100.8/24 scope global cvl_0_0 00:21:53.259 valid_lft forever preferred_lft forever 00:21:53.259 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:21:53.259 valid_lft forever preferred_lft forever 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:21:53.259 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:53.259 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:21:53.259 altname enp175s0f1np1 00:21:53.259 altname ens801f1np1 00:21:53.259 inet 192.168.100.9/24 scope global cvl_0_1 00:21:53.259 valid_lft forever preferred_lft forever 00:21:53.259 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:21:53.259 valid_lft forever preferred_lft forever 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for 
net_dev in "${net_devs[@]}" 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo cvl_0_0 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo cvl_0_1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:53.259 192.168.100.9' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:53.259 192.168.100.9' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:53.259 192.168.100.9' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:53.259 
01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=387560 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 387560 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 387560 ']' 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.259 01:05:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:53.518 Malloc0 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:21:53.518 01:06:00 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:22:25.582 Fuzzing completed. 
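The fuzz pass that just completed was driven by the bring-up traced above: a standalone target pinned to core 0, one Malloc-backed subsystem, and a single RDMA listener. Condensed into the equivalent manual sequence (a sketch only, assuming rpc_cmd forwards to scripts/rpc.py on the default /var/tmp/spdk.sock socket), it looks like:

    # Sketch of the bring-up and 30-second fuzz pass traced above (paths relative to the SPDK checkout)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!        # 387560 in this run

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # random-seeded 30-second fuzz pass against the listener created above
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a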
Shutting down the fuzz application 00:22:25.582 00:22:25.582 Dumping successful admin opcodes: 00:22:25.582 8, 9, 10, 24, 00:22:25.582 Dumping successful io opcodes: 00:22:25.582 0, 9, 00:22:25.582 NS: 0x2000008f0ec0 I/O qp, Total commands completed: 989078, total successful commands: 5791, random_seed: 2061929408 00:22:25.582 NS: 0x2000008f0ec0 admin qp, Total commands completed: 125087, total successful commands: 1025, random_seed: 2289664192 00:22:25.582 01:06:31 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:26.149 Fuzzing completed. Shutting down the fuzz application 00:22:26.149 00:22:26.149 Dumping successful admin opcodes: 00:22:26.149 24, 00:22:26.149 Dumping successful io opcodes: 00:22:26.149 00:22:26.149 NS: 0x2000008f0ec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2922336804 00:22:26.149 NS: 0x2000008f0ec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2922412642 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:26.149 rmmod nvme_rdma 00:22:26.149 rmmod nvme_fabrics 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 387560 ']' 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 387560 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 387560 ']' 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 387560 00:22:26.149 01:06:32 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387560 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387560' 00:22:26.149 killing process with pid 387560 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 387560 00:22:26.149 01:06:32 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 387560 00:22:27.526 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:27.526 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:27.526 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:27.526 00:22:27.526 real 0m41.010s 00:22:27.526 user 0m58.088s 00:22:27.526 sys 0m15.242s 00:22:27.526 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.526 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:27.526 ************************************ 00:22:27.526 END TEST nvmf_fuzz 00:22:27.526 ************************************ 00:22:27.526 01:06:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:22:27.526 01:06:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:27.526 01:06:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.526 01:06:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:27.786 ************************************ 00:22:27.786 START TEST nvmf_multiconnection 00:22:27.786 ************************************ 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:22:27.786 * Looking for test storage... 
00:22:27.786 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:27.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.786 --rc genhtml_branch_coverage=1 00:22:27.786 --rc genhtml_function_coverage=1 00:22:27.786 --rc genhtml_legend=1 00:22:27.786 --rc geninfo_all_blocks=1 00:22:27.786 --rc geninfo_unexecuted_blocks=1 00:22:27.786 00:22:27.786 ' 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:27.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.786 --rc genhtml_branch_coverage=1 00:22:27.786 --rc genhtml_function_coverage=1 00:22:27.786 --rc genhtml_legend=1 00:22:27.786 --rc geninfo_all_blocks=1 00:22:27.786 --rc geninfo_unexecuted_blocks=1 00:22:27.786 00:22:27.786 ' 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:27.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.786 --rc genhtml_branch_coverage=1 00:22:27.786 --rc genhtml_function_coverage=1 00:22:27.786 --rc genhtml_legend=1 00:22:27.786 --rc geninfo_all_blocks=1 00:22:27.786 --rc geninfo_unexecuted_blocks=1 00:22:27.786 00:22:27.786 ' 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:27.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.786 --rc genhtml_branch_coverage=1 00:22:27.786 --rc genhtml_function_coverage=1 00:22:27.786 --rc genhtml_legend=1 00:22:27.786 --rc geninfo_all_blocks=1 00:22:27.786 --rc geninfo_unexecuted_blocks=1 00:22:27.786 00:22:27.786 ' 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.786 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:27.787 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.787 01:06:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 
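The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' with an empty operand while build_nvmf_app_args runs; it is a captured warning, not a test failure. A guarded form of that kind of numeric test (the variable name below is a placeholder, not the one common.sh uses) would be:

    # Sketch: numeric test that tolerates an unset/empty value (placeholder variable name)
    if [ "${some_flag:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi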
00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.353 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:22:34.354 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:34.354 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@405 -- # modinfo irdma 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:34.354 01:06:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:34.354 Found net devices under 0000:af:00.0: cvl_0_0 
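Device discovery above matched both E810 ports (PCI ID 0x8086:0x159b, bound to the ice driver) and then loaded the irdma provider with RoCE enabled before resolving the netdev names. A minimal sketch of that step, reusing the PCI addresses from this run:

    # Sketch: enable RoCE on the Intel E810 ports seen in this run
    modinfo irdma > /dev/null                    # confirm the provider module is available
    modprobe irdma roce_ena=1                    # load it with RoCE enabled, as the harness does
    ls /sys/bus/pci/devices/0000:af:00.0/net     # -> cvl_0_0 in this log
    ls /sys/bus/pci/devices/0000:af:00.1/net     # -> cvl_0_1 in this log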
00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:34.354 Found net devices under 0000:af:00.1: cvl_0_1 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 
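rdma_device_init above pulls in the generic IB/RDMA core stack before any NIC addresses are assigned; the modprobe sequence captured in the trace is equivalent to:

    # Modules loaded by load_ib_rdma_modules in this run
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done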
00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo cvl_0_0 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo cvl_0_1 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:22:34.354 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:22:34.355 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:22:34.355 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:22:34.355 altname enp175s0f0np0 00:22:34.355 altname ens801f0np0 00:22:34.355 inet 192.168.100.8/24 scope global cvl_0_0 00:22:34.355 valid_lft forever preferred_lft forever 00:22:34.355 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:22:34.355 valid_lft forever preferred_lft forever 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:22:34.355 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:22:34.355 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:22:34.355 altname enp175s0f1np1 00:22:34.355 altname ens801f1np1 00:22:34.355 inet 192.168.100.9/24 scope global cvl_0_1 00:22:34.355 valid_lft forever preferred_lft forever 00:22:34.355 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:22:34.355 valid_lft forever preferred_lft forever 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo cvl_0_0 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:34.355 01:06:40 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo cvl_0_1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:34.355 192.168.100.9' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:34.355 192.168.100.9' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:34.355 192.168.100.9' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:34.355 01:06:40 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=396748 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 396748 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 396748 ']' 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.355 01:06:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.355 [2024-11-19 01:06:40.295850] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:34.355 [2024-11-19 01:06:40.295944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.355 [2024-11-19 01:06:40.423240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.355 [2024-11-19 01:06:40.536536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.355 [2024-11-19 01:06:40.536580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.355 [2024-11-19 01:06:40.536590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.355 [2024-11-19 01:06:40.536601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.356 [2024-11-19 01:06:40.536609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:34.356 [2024-11-19 01:06:40.538850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.356 [2024-11-19 01:06:40.538940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.356 [2024-11-19 01:06:40.538947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.356 [2024-11-19 01:06:40.538970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.615 [2024-11-19 01:06:41.153512] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:22:34.615 [2024-11-19 01:06:41.163119] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:22:34.615 [2024-11-19 01:06:41.163148] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.615 Malloc1 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.615 [2024-11-19 01:06:41.291261] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.615 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.874 Malloc2 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.874 Malloc3 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.874 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 Malloc4 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 Malloc5 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 Malloc6 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.133 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.134 01:06:41 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.134 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.392 Malloc7 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.392 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.393 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:22:35.393 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.393 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.393 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.393 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.393 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:35.393 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.393 01:06:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.393 Malloc8 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.393 01:06:42 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.393 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.652 Malloc9 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.652 01:06:42 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.652 Malloc10 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.652 Malloc11 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.652 01:06:42 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.652 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:35.911 01:06:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:22:38.441 01:06:44 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:38.441 01:06:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:40.343 01:06:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:40.343 01:06:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:40.343 01:06:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:22:40.343 01:06:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:40.343 01:06:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:40.343 01:06:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:40.343 01:06:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:40.343 01:06:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:22:40.602 01:06:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:40.602 01:06:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:40.602 01:06:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:40.602 01:06:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:40.602 01:06:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:42.503 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:42.503 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:42.503 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:22:42.503 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:42.503 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:42.503 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:42.503 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.503 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:22:42.761 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:42.761 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:42.761 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:42.761 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:42.761 01:06:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:44.663 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:44.663 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:44.663 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:22:44.922 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:44.922 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:44.922 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:44.922 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.922 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:22:44.922 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:44.922 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:44.922 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:44.922 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:44.922 01:06:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:47.452 01:06:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:49.356 01:06:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:49.356 01:06:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:49.356 01:06:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:22:49.356 01:06:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:49.356 01:06:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:49.356 01:06:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:49.356 01:06:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:49.356 01:06:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:22:49.614 01:06:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:49.614 01:06:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:49.614 01:06:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:49.614 01:06:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:49.614 01:06:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:51.517 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:51.517 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:51.517 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:22:51.517 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:51.517 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:22:51.517 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:51.517 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.517 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:22:51.776 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:51.776 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:51.776 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:51.776 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:51.776 01:06:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:53.683 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:53.683 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:53.683 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:22:53.683 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:53.683 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:53.683 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:53.683 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.683 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:22:53.942 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:53.942 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:53.942 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:53.942 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:53.942 01:07:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:22:56.473 01:07:02 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:56.473 01:07:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:58.373 01:07:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:58.373 01:07:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:58.373 01:07:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:22:58.373 01:07:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:58.373 01:07:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:58.373 01:07:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:58.373 01:07:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:58.373 01:07:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:22:58.631 01:07:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:58.631 01:07:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:58.631 01:07:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:58.631 01:07:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:58.631 01:07:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:00.532 01:07:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:00.532 01:07:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:00.532 01:07:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:23:00.532 01:07:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:00.532 01:07:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:00.532 01:07:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:00.532 01:07:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:00.532 [global] 00:23:00.532 thread=1 00:23:00.532 invalidate=1 00:23:00.532 rw=read 00:23:00.532 time_based=1 00:23:00.532 runtime=10 00:23:00.532 ioengine=libaio 00:23:00.532 direct=1 00:23:00.532 bs=262144 00:23:00.532 iodepth=64 00:23:00.532 norandommap=1 00:23:00.532 numjobs=1 00:23:00.532 00:23:00.532 [job0] 00:23:00.532 filename=/dev/nvme0n1 00:23:00.532 [job1] 00:23:00.532 filename=/dev/nvme10n1 00:23:00.532 [job2] 00:23:00.532 filename=/dev/nvme11n1 00:23:00.532 [job3] 00:23:00.532 filename=/dev/nvme2n1 00:23:00.532 [job4] 00:23:00.532 filename=/dev/nvme3n1 00:23:00.532 [job5] 00:23:00.532 filename=/dev/nvme4n1 00:23:00.532 [job6] 00:23:00.532 filename=/dev/nvme5n1 00:23:00.532 [job7] 00:23:00.532 filename=/dev/nvme6n1 00:23:00.532 [job8] 00:23:00.532 filename=/dev/nvme7n1 00:23:00.532 [job9] 00:23:00.532 filename=/dev/nvme8n1 00:23:00.532 [job10] 00:23:00.532 filename=/dev/nvme9n1 00:23:00.791 Could not set queue depth (nvme0n1) 00:23:00.791 Could not set queue depth (nvme10n1) 00:23:00.791 Could not set queue depth (nvme11n1) 00:23:00.791 Could not set queue depth (nvme2n1) 00:23:00.791 Could not set queue depth (nvme3n1) 00:23:00.791 Could not set queue depth (nvme4n1) 00:23:00.791 Could not set queue depth (nvme5n1) 00:23:00.791 Could not set queue depth (nvme6n1) 00:23:00.791 Could not set queue depth (nvme7n1) 00:23:00.791 Could not set queue depth (nvme8n1) 00:23:00.791 Could not set queue depth (nvme9n1) 00:23:01.049 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, 
(T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:01.049 fio-3.35 00:23:01.049 Starting 11 threads 00:23:13.257 00:23:13.257 job0: (groupid=0, jobs=1): err= 0: pid=401631: Tue Nov 19 01:07:17 2024 00:23:13.257 read: IOPS=1086, BW=272MiB/s (285MB/s)(2729MiB/10045msec) 00:23:13.257 slat (usec): min=11, max=15349, avg=908.51, stdev=2183.69 00:23:13.257 clat (msec): min=12, max=104, avg=57.94, stdev= 9.24 00:23:13.257 lat (msec): min=12, max=107, avg=58.85, stdev= 9.53 00:23:13.257 clat percentiles (msec): 00:23:13.257 | 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 49], 00:23:13.257 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 61], 00:23:13.257 | 70.00th=[ 62], 80.00th=[ 63], 90.00th=[ 67], 95.00th=[ 74], 00:23:13.257 | 99.00th=[ 90], 99.50th=[ 92], 99.90th=[ 97], 99.95th=[ 104], 00:23:13.257 | 99.99th=[ 105] 00:23:13.257 bw ( KiB/s): min=192000, max=363008, per=6.77%, avg=277785.60, stdev=40254.81, samples=20 00:23:13.257 iops : min= 750, max= 1418, avg=1085.10, stdev=157.25, samples=20 00:23:13.257 lat (msec) : 20=0.15%, 50=21.22%, 100=78.57%, 250=0.06% 00:23:13.257 cpu : usr=0.33%, sys=4.34%, ctx=2268, majf=0, minf=4097 00:23:13.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:13.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.257 issued rwts: total=10914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.257 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.257 job1: (groupid=0, jobs=1): err= 0: pid=401632: Tue Nov 19 01:07:17 2024 00:23:13.257 read: IOPS=2686, BW=672MiB/s (704MB/s)(6732MiB/10021msec) 00:23:13.257 slat (usec): min=10, max=17887, avg=359.69, stdev=1085.57 00:23:13.257 clat (usec): min=950, max=102248, avg=23438.47, stdev=17601.27 00:23:13.257 lat (usec): min=983, max=105986, avg=23798.16, stdev=17884.66 00:23:13.257 clat percentiles (usec): 00:23:13.257 | 1.00th=[ 6652], 5.00th=[13566], 10.00th=[13960], 20.00th=[14222], 00:23:13.257 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14877], 60.00th=[15270], 00:23:13.257 | 70.00th=[15664], 80.00th=[40633], 90.00th=[53216], 95.00th=[64226], 00:23:13.257 | 99.00th=[78119], 99.50th=[86508], 99.90th=[91751], 99.95th=[91751], 00:23:13.257 | 99.99th=[99091] 00:23:13.257 bw ( KiB/s): min=194048, max=1105408, per=16.76%, avg=687692.80, stdev=397905.03, samples=20 00:23:13.257 iops : min= 758, max= 4318, avg=2686.30, stdev=1554.32, samples=20 00:23:13.257 lat (usec) : 1000=0.01% 00:23:13.257 lat (msec) : 2=0.28%, 4=0.38%, 10=1.13%, 20=74.87%, 50=10.70% 00:23:13.257 lat (msec) : 100=12.63%, 250=0.01% 00:23:13.257 cpu : usr=0.49%, sys=6.33%, ctx=6800, majf=0, minf=4097 00:23:13.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:13.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.257 issued rwts: total=26926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.257 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.257 job2: (groupid=0, jobs=1): err= 0: pid=401634: Tue Nov 19 01:07:17 2024 00:23:13.257 read: IOPS=1238, BW=310MiB/s (325MB/s)(3108MiB/10037msec) 00:23:13.257 slat (usec): min=10, max=31856, avg=768.23, stdev=1907.75 00:23:13.257 clat (msec): min=13, max=103, avg=50.84, stdev=10.75 00:23:13.257 lat (msec): min=13, max=103, avg=51.61, stdev=10.96 00:23:13.257 clat percentiles 
(msec): 00:23:13.257 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 45], 00:23:13.257 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:23:13.257 | 70.00th=[ 53], 80.00th=[ 61], 90.00th=[ 68], 95.00th=[ 74], 00:23:13.257 | 99.00th=[ 89], 99.50th=[ 90], 99.90th=[ 99], 99.95th=[ 101], 00:23:13.257 | 99.99th=[ 103] 00:23:13.257 bw ( KiB/s): min=182272, max=385024, per=7.72%, avg=316646.40, stdev=57593.56, samples=20 00:23:13.257 iops : min= 712, max= 1504, avg=1236.90, stdev=224.97, samples=20 00:23:13.257 lat (msec) : 20=0.26%, 50=67.46%, 100=32.22%, 250=0.06% 00:23:13.257 cpu : usr=0.35%, sys=4.38%, ctx=3300, majf=0, minf=4097 00:23:13.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:13.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.258 issued rwts: total=12432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.258 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.258 job3: (groupid=0, jobs=1): err= 0: pid=401635: Tue Nov 19 01:07:17 2024 00:23:13.258 read: IOPS=1519, BW=380MiB/s (398MB/s)(3808MiB/10024msec) 00:23:13.258 slat (usec): min=10, max=13048, avg=638.85, stdev=1495.38 00:23:13.258 clat (usec): min=11298, max=65305, avg=41445.10, stdev=7802.59 00:23:13.258 lat (usec): min=11518, max=66371, avg=42083.94, stdev=8016.73 00:23:13.258 clat percentiles (usec): 00:23:13.258 | 1.00th=[21365], 5.00th=[28443], 10.00th=[29230], 20.00th=[32375], 00:23:13.258 | 30.00th=[40633], 40.00th=[42206], 50.00th=[43254], 60.00th=[43779], 00:23:13.258 | 70.00th=[44303], 80.00th=[45351], 90.00th=[52691], 95.00th=[53740], 00:23:13.258 | 99.00th=[56361], 99.50th=[58459], 99.90th=[63701], 99.95th=[63701], 00:23:13.258 | 99.99th=[64750] 00:23:13.258 bw ( KiB/s): min=297472, max=549888, per=9.46%, avg=388275.20, stdev=60927.72, samples=20 00:23:13.258 iops : min= 1162, max= 2148, avg=1516.70, stdev=238.00, samples=20 00:23:13.258 lat (msec) : 20=0.83%, 50=86.32%, 100=12.84% 00:23:13.258 cpu : usr=0.27%, sys=4.48%, ctx=3753, majf=0, minf=4097 00:23:13.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:13.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.258 issued rwts: total=15230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.258 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.258 job4: (groupid=0, jobs=1): err= 0: pid=401636: Tue Nov 19 01:07:17 2024 00:23:13.258 read: IOPS=1343, BW=336MiB/s (352MB/s)(3368MiB/10029msec) 00:23:13.258 slat (usec): min=9, max=36000, avg=693.44, stdev=1837.91 00:23:13.258 clat (msec): min=9, max=104, avg=46.89, stdev= 9.85 00:23:13.258 lat (msec): min=10, max=120, avg=47.59, stdev=10.05 00:23:13.258 clat percentiles (msec): 00:23:13.258 | 1.00th=[ 29], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 43], 00:23:13.258 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:23:13.258 | 70.00th=[ 46], 80.00th=[ 54], 90.00th=[ 58], 95.00th=[ 70], 00:23:13.258 | 99.00th=[ 88], 99.50th=[ 90], 99.90th=[ 97], 99.95th=[ 101], 00:23:13.258 | 99.99th=[ 105] 00:23:13.258 bw ( KiB/s): min=222720, max=402432, per=8.37%, avg=343270.40, stdev=50395.90, samples=20 00:23:13.258 iops : min= 870, max= 1572, avg=1340.90, stdev=196.86, samples=20 00:23:13.258 lat (msec) : 10=0.01%, 20=0.30%, 50=74.55%, 100=25.09%, 250=0.05% 00:23:13.258 cpu : 
usr=0.42%, sys=4.92%, ctx=3780, majf=0, minf=4097 00:23:13.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:13.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.258 issued rwts: total=13472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.258 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.258 job5: (groupid=0, jobs=1): err= 0: pid=401637: Tue Nov 19 01:07:17 2024 00:23:13.258 read: IOPS=1279, BW=320MiB/s (336MB/s)(3211MiB/10035msec) 00:23:13.258 slat (usec): min=10, max=25078, avg=775.37, stdev=1864.77 00:23:13.258 clat (usec): min=11162, max=88783, avg=49184.30, stdev=8186.85 00:23:13.258 lat (usec): min=11369, max=96466, avg=49959.67, stdev=8439.23 00:23:13.258 clat percentiles (usec): 00:23:13.258 | 1.00th=[40109], 5.00th=[41157], 10.00th=[42206], 20.00th=[44303], 00:23:13.258 | 30.00th=[44827], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:23:13.258 | 70.00th=[49546], 80.00th=[55837], 90.00th=[61080], 95.00th=[67634], 00:23:13.258 | 99.00th=[73925], 99.50th=[76022], 99.90th=[80217], 99.95th=[81265], 00:23:13.258 | 99.99th=[88605] 00:23:13.258 bw ( KiB/s): min=248320, max=384512, per=7.98%, avg=327193.60, stdev=44185.42, samples=20 00:23:13.258 iops : min= 970, max= 1502, avg=1278.10, stdev=172.60, samples=20 00:23:13.258 lat (msec) : 20=0.16%, 50=70.76%, 100=29.09% 00:23:13.258 cpu : usr=0.46%, sys=4.74%, ctx=2652, majf=0, minf=4097 00:23:13.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:13.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.258 issued rwts: total=12844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.258 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.258 job6: (groupid=0, jobs=1): err= 0: pid=401638: Tue Nov 19 01:07:17 2024 00:23:13.258 read: IOPS=1088, BW=272MiB/s (285MB/s)(2733MiB/10045msec) 00:23:13.258 slat (usec): min=9, max=18751, avg=907.32, stdev=2175.37 00:23:13.258 clat (msec): min=12, max=102, avg=57.84, stdev= 9.12 00:23:13.258 lat (msec): min=13, max=105, avg=58.75, stdev= 9.42 00:23:13.258 clat percentiles (msec): 00:23:13.258 | 1.00th=[ 42], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 48], 00:23:13.258 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 61], 00:23:13.258 | 70.00th=[ 62], 80.00th=[ 62], 90.00th=[ 67], 95.00th=[ 74], 00:23:13.258 | 99.00th=[ 89], 99.50th=[ 91], 99.90th=[ 100], 99.95th=[ 103], 00:23:13.258 | 99.99th=[ 103] 00:23:13.258 bw ( KiB/s): min=191871, max=364032, per=6.78%, avg=278291.15, stdev=40581.99, samples=20 00:23:13.258 iops : min= 749, max= 1422, avg=1087.05, stdev=158.58, samples=20 00:23:13.258 lat (msec) : 20=0.10%, 50=21.49%, 100=78.35%, 250=0.05% 00:23:13.258 cpu : usr=0.48%, sys=4.30%, ctx=2297, majf=0, minf=4097 00:23:13.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:13.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.258 issued rwts: total=10933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.258 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.258 job7: (groupid=0, jobs=1): err= 0: pid=401639: Tue Nov 19 01:07:17 2024 00:23:13.258 read: IOPS=1760, BW=440MiB/s (461MB/s)(4421MiB/10045msec) 
00:23:13.258 slat (usec): min=10, max=34710, avg=562.95, stdev=1846.92 00:23:13.258 clat (msec): min=7, max=110, avg=35.76, stdev=21.92 00:23:13.258 lat (msec): min=7, max=110, avg=36.32, stdev=22.31 00:23:13.258 clat percentiles (msec): 00:23:13.258 | 1.00th=[ 14], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 15], 00:23:13.258 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 56], 00:23:13.258 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 62], 95.00th=[ 63], 00:23:13.258 | 99.00th=[ 70], 99.50th=[ 73], 99.90th=[ 90], 99.95th=[ 97], 00:23:13.258 | 99.99th=[ 102] 00:23:13.258 bw ( KiB/s): min=256512, max=1101312, per=11.00%, avg=451097.60, stdev=327467.45, samples=20 00:23:13.258 iops : min= 1002, max= 4302, avg=1762.10, stdev=1279.17, samples=20 00:23:13.258 lat (msec) : 10=0.15%, 20=50.60%, 50=7.14%, 100=42.07%, 250=0.03% 00:23:13.258 cpu : usr=0.42%, sys=5.04%, ctx=3827, majf=0, minf=3722 00:23:13.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:13.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.258 issued rwts: total=17684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.258 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.258 job8: (groupid=0, jobs=1): err= 0: pid=401640: Tue Nov 19 01:07:17 2024 00:23:13.258 read: IOPS=1268, BW=317MiB/s (333MB/s)(3182MiB/10030msec) 00:23:13.258 slat (usec): min=10, max=26034, avg=756.95, stdev=1933.27 00:23:13.258 clat (usec): min=12588, max=94762, avg=49631.39, stdev=10429.01 00:23:13.258 lat (msec): min=12, max=100, avg=50.39, stdev=10.68 00:23:13.258 clat percentiles (usec): 00:23:13.258 | 1.00th=[29230], 5.00th=[37487], 10.00th=[42206], 20.00th=[42730], 00:23:13.258 | 30.00th=[43254], 40.00th=[43779], 50.00th=[44827], 60.00th=[50070], 00:23:13.258 | 70.00th=[53740], 80.00th=[57410], 90.00th=[63177], 95.00th=[69731], 00:23:13.258 | 99.00th=[87557], 99.50th=[89654], 99.90th=[92799], 99.95th=[93848], 00:23:13.258 | 99.99th=[93848] 00:23:13.258 bw ( KiB/s): min=244736, max=417280, per=7.90%, avg=324198.40, stdev=56984.28, samples=20 00:23:13.258 iops : min= 956, max= 1630, avg=1266.40, stdev=222.59, samples=20 00:23:13.258 lat (msec) : 20=0.09%, 50=60.01%, 100=39.90% 00:23:13.258 cpu : usr=0.34%, sys=4.63%, ctx=2962, majf=0, minf=4097 00:23:13.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:13.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.258 issued rwts: total=12727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.258 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.258 job9: (groupid=0, jobs=1): err= 0: pid=401641: Tue Nov 19 01:07:17 2024 00:23:13.258 read: IOPS=1487, BW=372MiB/s (390MB/s)(3736MiB/10044msec) 00:23:13.258 slat (usec): min=9, max=19540, avg=661.24, stdev=1963.62 00:23:13.258 clat (usec): min=392, max=102891, avg=42317.37, stdev=19503.94 00:23:13.258 lat (usec): min=458, max=102939, avg=42978.61, stdev=19872.05 00:23:13.258 clat percentiles (msec): 00:23:13.258 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:23:13.258 | 30.00th=[ 30], 40.00th=[ 40], 50.00th=[ 46], 60.00th=[ 57], 00:23:13.258 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 62], 95.00th=[ 64], 00:23:13.258 | 99.00th=[ 71], 99.50th=[ 75], 99.90th=[ 83], 99.95th=[ 85], 00:23:13.258 | 99.99th=[ 104] 00:23:13.258 bw ( KiB/s): 
min=258048, max=997376, per=9.29%, avg=381017.35, stdev=212141.46, samples=20 00:23:13.258 iops : min= 1008, max= 3896, avg=1488.30, stdev=828.55, samples=20 00:23:13.258 lat (usec) : 500=0.01%, 1000=0.01% 00:23:13.259 lat (msec) : 2=0.11%, 4=1.19%, 10=1.13%, 20=20.60%, 50=27.68% 00:23:13.259 lat (msec) : 100=49.25%, 250=0.02% 00:23:13.259 cpu : usr=0.32%, sys=4.62%, ctx=3708, majf=0, minf=4097 00:23:13.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:13.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.259 issued rwts: total=14943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.259 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.259 job10: (groupid=0, jobs=1): err= 0: pid=401642: Tue Nov 19 01:07:17 2024 00:23:13.259 read: IOPS=1281, BW=320MiB/s (336MB/s)(3216MiB/10036msec) 00:23:13.259 slat (usec): min=10, max=14177, avg=774.09, stdev=1800.14 00:23:13.259 clat (usec): min=11007, max=85888, avg=49112.31, stdev=8222.39 00:23:13.259 lat (usec): min=11223, max=87941, avg=49886.40, stdev=8469.43 00:23:13.259 clat percentiles (usec): 00:23:13.259 | 1.00th=[40109], 5.00th=[41157], 10.00th=[42206], 20.00th=[43779], 00:23:13.259 | 30.00th=[44827], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:23:13.259 | 70.00th=[49021], 80.00th=[55837], 90.00th=[61080], 95.00th=[67634], 00:23:13.259 | 99.00th=[73925], 99.50th=[76022], 99.90th=[82314], 99.95th=[83362], 00:23:13.259 | 99.99th=[85459] 00:23:13.259 bw ( KiB/s): min=238080, max=386048, per=7.99%, avg=327680.00, stdev=44754.90, samples=20 00:23:13.259 iops : min= 930, max= 1508, avg=1280.00, stdev=174.82, samples=20 00:23:13.259 lat (msec) : 20=0.18%, 50=71.60%, 100=28.22% 00:23:13.259 cpu : usr=0.43%, sys=5.08%, ctx=2625, majf=0, minf=4097 00:23:13.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:13.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.259 issued rwts: total=12863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.259 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.259 00:23:13.259 Run status group 0 (all jobs): 00:23:13.259 READ: bw=4006MiB/s (4201MB/s), 272MiB/s-672MiB/s (285MB/s-704MB/s), io=39.3GiB (42.2GB), run=10021-10045msec 00:23:13.259 00:23:13.259 Disk stats (read/write): 00:23:13.259 nvme0n1: ios=21700/0, merge=0/0, ticks=1228964/0, in_queue=1228964, util=97.82% 00:23:13.259 nvme10n1: ios=53712/0, merge=0/0, ticks=1227110/0, in_queue=1227110, util=97.97% 00:23:13.259 nvme11n1: ios=24737/0, merge=0/0, ticks=1230367/0, in_queue=1230367, util=98.07% 00:23:13.259 nvme2n1: ios=30319/0, merge=0/0, ticks=1229398/0, in_queue=1229398, util=98.16% 00:23:13.259 nvme3n1: ios=26817/0, merge=0/0, ticks=1231238/0, in_queue=1231238, util=98.21% 00:23:13.259 nvme4n1: ios=25553/0, merge=0/0, ticks=1229342/0, in_queue=1229342, util=98.49% 00:23:13.259 nvme5n1: ios=21728/0, merge=0/0, ticks=1230819/0, in_queue=1230819, util=98.60% 00:23:13.259 nvme6n1: ios=35245/0, merge=0/0, ticks=1227684/0, in_queue=1227684, util=98.71% 00:23:13.259 nvme7n1: ios=25317/0, merge=0/0, ticks=1231869/0, in_queue=1231869, util=99.01% 00:23:13.259 nvme8n1: ios=29758/0, merge=0/0, ticks=1227221/0, in_queue=1227221, util=99.15% 00:23:13.259 nvme9n1: ios=25600/0, merge=0/0, ticks=1231410/0, in_queue=1231410, util=99.26% 
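[Editor's aside - illustrative only, not part of the captured console output.] The read-phase summaries above are internally consistent: fio reports both average bandwidth (KiB/s) and average IOPS per job, and with the 256 KiB block size these runs use (matching the bs=262144 seen in the job file that follows) the two figures should differ by a factor of 256, up to rounding. The short sketch below re-derives IOPS from the reported bandwidth using values copied from the log; the variable names, the loop, and the check itself are the editor's own and are not invoked by the test scripts.

  #!/usr/bin/env bash
  # Illustrative sanity check (editor's sketch): with a 256 KiB block size,
  # average bandwidth in KiB/s divided by 256 should reproduce fio's average IOPS.
  # The bw/iops pairs are copied from the job summaries above.
  bs_kib=256
  for pair in "451097.60 1762.10" "324198.40 1266.40" "381017.35 1488.30"; do
    set -- $pair
    awk -v bw="$1" -v iops="$2" -v bs="$bs_kib" 'BEGIN {
      printf "bw %.2f KiB/s / %d KiB = %.2f IOPS (fio reported %.2f)\n", bw, bs, bw/bs, iops
    }'
  done

The aggregate line checks out the same way: 4006 MiB/s sustained over the ~10.045 s runtime is about 40,240 MiB, i.e. the reported io=39.3 GiB.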
00:23:13.259 01:07:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:13.259 [global] 00:23:13.259 thread=1 00:23:13.259 invalidate=1 00:23:13.259 rw=randwrite 00:23:13.259 time_based=1 00:23:13.259 runtime=10 00:23:13.259 ioengine=libaio 00:23:13.259 direct=1 00:23:13.259 bs=262144 00:23:13.259 iodepth=64 00:23:13.259 norandommap=1 00:23:13.259 numjobs=1 00:23:13.259 00:23:13.259 [job0] 00:23:13.259 filename=/dev/nvme0n1 00:23:13.259 [job1] 00:23:13.259 filename=/dev/nvme10n1 00:23:13.259 [job2] 00:23:13.259 filename=/dev/nvme11n1 00:23:13.259 [job3] 00:23:13.259 filename=/dev/nvme2n1 00:23:13.259 [job4] 00:23:13.259 filename=/dev/nvme3n1 00:23:13.259 [job5] 00:23:13.259 filename=/dev/nvme4n1 00:23:13.259 [job6] 00:23:13.259 filename=/dev/nvme5n1 00:23:13.259 [job7] 00:23:13.259 filename=/dev/nvme6n1 00:23:13.259 [job8] 00:23:13.259 filename=/dev/nvme7n1 00:23:13.259 [job9] 00:23:13.259 filename=/dev/nvme8n1 00:23:13.259 [job10] 00:23:13.259 filename=/dev/nvme9n1 00:23:13.259 Could not set queue depth (nvme0n1) 00:23:13.259 Could not set queue depth (nvme10n1) 00:23:13.259 Could not set queue depth (nvme11n1) 00:23:13.259 Could not set queue depth (nvme2n1) 00:23:13.259 Could not set queue depth (nvme3n1) 00:23:13.259 Could not set queue depth (nvme4n1) 00:23:13.259 Could not set queue depth (nvme5n1) 00:23:13.259 Could not set queue depth (nvme6n1) 00:23:13.259 Could not set queue depth (nvme7n1) 00:23:13.259 Could not set queue depth (nvme8n1) 00:23:13.259 Could not set queue depth (nvme9n1) 00:23:13.259 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.259 fio-3.35 00:23:13.259 Starting 11 threads 00:23:23.232 00:23:23.232 job0: (groupid=0, jobs=1): err= 0: pid=403164: Tue Nov 19 01:07:28 2024 00:23:23.232 write: IOPS=2133, BW=533MiB/s (559MB/s)(5343MiB/10015msec); 0 zone resets 00:23:23.232 slat (usec): min=15, max=11894, avg=465.80, stdev=1171.57 00:23:23.232 clat (usec): min=5172, max=70398, avg=29513.71, stdev=10136.65 00:23:23.232 lat (usec): min=5248, max=70463, avg=29979.51, stdev=10325.29 
00:23:23.232 clat percentiles (usec): 00:23:23.232 | 1.00th=[17433], 5.00th=[17957], 10.00th=[18482], 20.00th=[19006], 00:23:23.232 | 30.00th=[19530], 40.00th=[20055], 50.00th=[35914], 60.00th=[37487], 00:23:23.232 | 70.00th=[38011], 80.00th=[39060], 90.00th=[40633], 95.00th=[42206], 00:23:23.232 | 99.00th=[47449], 99.50th=[50594], 99.90th=[58983], 99.95th=[59507], 00:23:23.232 | 99.99th=[60556] 00:23:23.232 bw ( KiB/s): min=388096, max=835072, per=16.12%, avg=545510.40, stdev=191998.42, samples=20 00:23:23.232 iops : min= 1516, max= 3262, avg=2130.90, stdev=749.99, samples=20 00:23:23.232 lat (msec) : 10=0.02%, 20=38.78%, 50=60.58%, 100=0.62% 00:23:23.232 cpu : usr=3.62%, sys=5.30%, ctx=3656, majf=0, minf=1 00:23:23.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:23:23.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.232 issued rwts: total=0,21372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.232 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.232 job1: (groupid=0, jobs=1): err= 0: pid=403183: Tue Nov 19 01:07:28 2024 00:23:23.232 write: IOPS=1213, BW=303MiB/s (318MB/s)(3049MiB/10049msec); 0 zone resets 00:23:23.232 slat (usec): min=21, max=47498, avg=702.29, stdev=2119.70 00:23:23.232 clat (msec): min=2, max=134, avg=52.01, stdev=18.34 00:23:23.232 lat (msec): min=2, max=136, avg=52.71, stdev=18.58 00:23:23.232 clat percentiles (msec): 00:23:23.233 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 38], 00:23:23.233 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 42], 60.00th=[ 56], 00:23:23.233 | 70.00th=[ 58], 80.00th=[ 69], 90.00th=[ 85], 95.00th=[ 90], 00:23:23.233 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 112], 99.95th=[ 115], 00:23:23.233 | 99.99th=[ 130] 00:23:23.233 bw ( KiB/s): min=185856, max=421376, per=9.18%, avg=310630.40, stdev=90937.02, samples=20 00:23:23.233 iops : min= 726, max= 1646, avg=1213.40, stdev=355.22, samples=20 00:23:23.233 lat (msec) : 4=0.02%, 10=0.17%, 20=0.36%, 50=54.98%, 100=44.21% 00:23:23.233 lat (msec) : 250=0.26% 00:23:23.233 cpu : usr=5.46%, sys=3.58%, ctx=2838, majf=0, minf=1 00:23:23.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:23.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.233 issued rwts: total=0,12197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.233 job2: (groupid=0, jobs=1): err= 0: pid=403189: Tue Nov 19 01:07:28 2024 00:23:23.233 write: IOPS=870, BW=218MiB/s (228MB/s)(2190MiB/10059msec); 0 zone resets 00:23:23.233 slat (usec): min=22, max=31868, avg=1117.42, stdev=2989.76 00:23:23.233 clat (msec): min=5, max=129, avg=72.35, stdev=16.13 00:23:23.233 lat (msec): min=5, max=129, avg=73.47, stdev=16.57 00:23:23.233 clat percentiles (msec): 00:23:23.233 | 1.00th=[ 53], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57], 00:23:23.233 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 75], 00:23:23.233 | 70.00th=[ 79], 80.00th=[ 93], 90.00th=[ 96], 95.00th=[ 99], 00:23:23.233 | 99.00th=[ 107], 99.50th=[ 112], 99.90th=[ 126], 99.95th=[ 128], 00:23:23.233 | 99.99th=[ 130] 00:23:23.233 bw ( KiB/s): min=161792, max=282112, per=6.58%, avg=222617.60, stdev=43867.38, samples=20 00:23:23.233 iops : min= 632, max= 1102, avg=869.60, stdev=171.36, samples=20 
00:23:23.233 lat (msec) : 10=0.05%, 20=0.10%, 50=0.43%, 100=96.73%, 250=2.68% 00:23:23.233 cpu : usr=2.03%, sys=2.96%, ctx=1879, majf=0, minf=1 00:23:23.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:23.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.233 issued rwts: total=0,8759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.233 job3: (groupid=0, jobs=1): err= 0: pid=403193: Tue Nov 19 01:07:28 2024 00:23:23.233 write: IOPS=886, BW=222MiB/s (233MB/s)(2228MiB/10046msec); 0 zone resets 00:23:23.233 slat (usec): min=25, max=39044, avg=1096.16, stdev=3188.28 00:23:23.233 clat (msec): min=24, max=132, avg=71.04, stdev=16.24 00:23:23.233 lat (msec): min=24, max=133, avg=72.13, stdev=16.73 00:23:23.233 clat percentiles (msec): 00:23:23.233 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57], 00:23:23.233 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 74], 00:23:23.233 | 70.00th=[ 79], 80.00th=[ 93], 90.00th=[ 96], 95.00th=[ 99], 00:23:23.233 | 99.00th=[ 102], 99.50th=[ 109], 99.90th=[ 126], 99.95th=[ 127], 00:23:23.233 | 99.99th=[ 133] 00:23:23.233 bw ( KiB/s): min=165376, max=289792, per=6.69%, avg=226483.20, stdev=46351.45, samples=20 00:23:23.233 iops : min= 646, max= 1132, avg=884.70, stdev=181.06, samples=20 00:23:23.233 lat (msec) : 50=1.10%, 100=96.98%, 250=1.92% 00:23:23.233 cpu : usr=2.20%, sys=3.02%, ctx=1913, majf=0, minf=1 00:23:23.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:23.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.233 issued rwts: total=0,8910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.233 job4: (groupid=0, jobs=1): err= 0: pid=403196: Tue Nov 19 01:07:28 2024 00:23:23.233 write: IOPS=1047, BW=262MiB/s (275MB/s)(2634MiB/10059msec); 0 zone resets 00:23:23.233 slat (usec): min=22, max=25511, avg=946.44, stdev=2725.44 00:23:23.233 clat (msec): min=22, max=129, avg=60.14, stdev=15.17 00:23:23.233 lat (msec): min=22, max=129, avg=61.08, stdev=15.59 00:23:23.233 clat percentiles (msec): 00:23:23.233 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 54], 00:23:23.233 | 30.00th=[ 55], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 56], 00:23:23.233 | 70.00th=[ 64], 80.00th=[ 73], 90.00th=[ 87], 95.00th=[ 90], 00:23:23.233 | 99.00th=[ 97], 99.50th=[ 104], 99.90th=[ 115], 99.95th=[ 126], 00:23:23.233 | 99.99th=[ 130] 00:23:23.233 bw ( KiB/s): min=178176, max=440320, per=7.92%, avg=268062.20, stdev=63985.21, samples=20 00:23:23.233 iops : min= 696, max= 1720, avg=1047.10, stdev=249.96, samples=20 00:23:23.233 lat (msec) : 50=13.24%, 100=86.11%, 250=0.65% 00:23:23.233 cpu : usr=2.21%, sys=2.99%, ctx=2177, majf=0, minf=1 00:23:23.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:23.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.233 issued rwts: total=0,10535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.233 job5: (groupid=0, jobs=1): err= 0: pid=403205: Tue Nov 19 01:07:28 2024 00:23:23.233 
write: IOPS=908, BW=227MiB/s (238MB/s)(2283MiB/10048msec); 0 zone resets 00:23:23.233 slat (usec): min=22, max=37099, avg=1088.36, stdev=3362.77 00:23:23.233 clat (msec): min=9, max=132, avg=69.32, stdev=17.41 00:23:23.233 lat (msec): min=9, max=133, avg=70.40, stdev=17.92 00:23:23.233 clat percentiles (msec): 00:23:23.233 | 1.00th=[ 37], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 57], 00:23:23.233 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 71], 00:23:23.233 | 70.00th=[ 78], 80.00th=[ 93], 90.00th=[ 96], 95.00th=[ 99], 00:23:23.233 | 99.00th=[ 103], 99.50th=[ 111], 99.90th=[ 130], 99.95th=[ 132], 00:23:23.233 | 99.99th=[ 133] 00:23:23.233 bw ( KiB/s): min=159744, max=342016, per=6.86%, avg=232115.20, stdev=54093.31, samples=20 00:23:23.233 iops : min= 624, max= 1336, avg=906.70, stdev=211.30, samples=20 00:23:23.233 lat (msec) : 10=0.04%, 20=0.08%, 50=4.96%, 100=93.18%, 250=1.74% 00:23:23.233 cpu : usr=2.02%, sys=2.85%, ctx=1845, majf=0, minf=1 00:23:23.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:23.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.233 issued rwts: total=0,9131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.233 job6: (groupid=0, jobs=1): err= 0: pid=403213: Tue Nov 19 01:07:28 2024 00:23:23.233 write: IOPS=1985, BW=496MiB/s (520MB/s)(4987MiB/10047msec); 0 zone resets 00:23:23.233 slat (usec): min=12, max=23281, avg=487.26, stdev=1414.74 00:23:23.233 clat (usec): min=1143, max=115013, avg=31735.16, stdev=15683.22 00:23:23.233 lat (usec): min=1213, max=115075, avg=32222.42, stdev=15947.80 00:23:23.233 clat percentiles (msec): 00:23:23.233 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], 00:23:23.233 | 30.00th=[ 20], 40.00th=[ 20], 50.00th=[ 24], 60.00th=[ 37], 00:23:23.234 | 70.00th=[ 39], 80.00th=[ 41], 90.00th=[ 57], 95.00th=[ 59], 00:23:23.234 | 99.00th=[ 75], 99.50th=[ 92], 99.90th=[ 96], 99.95th=[ 97], 00:23:23.234 | 99.99th=[ 114] 00:23:23.234 bw ( KiB/s): min=236032, max=864256, per=15.04%, avg=509004.80, stdev=238319.67, samples=20 00:23:23.234 iops : min= 922, max= 3376, avg=1988.30, stdev=930.94, samples=20 00:23:23.234 lat (msec) : 2=0.02%, 4=0.14%, 10=1.23%, 20=40.03%, 50=42.68% 00:23:23.234 lat (msec) : 100=15.89%, 250=0.03% 00:23:23.234 cpu : usr=3.87%, sys=5.26%, ctx=3306, majf=0, minf=1 00:23:23.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:23.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.234 issued rwts: total=0,19947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.234 job7: (groupid=0, jobs=1): err= 0: pid=403219: Tue Nov 19 01:07:28 2024 00:23:23.234 write: IOPS=845, BW=211MiB/s (222MB/s)(2124MiB/10048msec); 0 zone resets 00:23:23.234 slat (usec): min=23, max=42093, avg=1165.80, stdev=3418.91 00:23:23.234 clat (msec): min=31, max=129, avg=74.50, stdev=18.02 00:23:23.234 lat (msec): min=31, max=138, avg=75.67, stdev=18.53 00:23:23.234 clat percentiles (msec): 00:23:23.234 | 1.00th=[ 37], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 57], 00:23:23.234 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 75], 60.00th=[ 82], 00:23:23.234 | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 99], 
00:23:23.234 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 121], 99.95th=[ 126], 00:23:23.234 | 99.99th=[ 130] 00:23:23.234 bw ( KiB/s): min=161280, max=343552, per=6.38%, avg=215859.20, stdev=53763.05, samples=20 00:23:23.234 iops : min= 630, max= 1342, avg=843.20, stdev=210.01, samples=20 00:23:23.234 lat (msec) : 50=5.12%, 100=92.00%, 250=2.88% 00:23:23.234 cpu : usr=1.79%, sys=2.90%, ctx=1792, majf=0, minf=1 00:23:23.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:23.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.234 issued rwts: total=0,8495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.234 job8: (groupid=0, jobs=1): err= 0: pid=403238: Tue Nov 19 01:07:28 2024 00:23:23.234 write: IOPS=985, BW=246MiB/s (258MB/s)(2477MiB/10058msec); 0 zone resets 00:23:23.234 slat (usec): min=20, max=57164, avg=952.37, stdev=3105.97 00:23:23.234 clat (msec): min=4, max=146, avg=64.00, stdev=15.11 00:23:23.234 lat (msec): min=5, max=152, avg=64.95, stdev=15.55 00:23:23.234 clat percentiles (msec): 00:23:23.234 | 1.00th=[ 36], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:23:23.234 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 56], 60.00th=[ 58], 00:23:23.234 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 93], 00:23:23.234 | 99.00th=[ 97], 99.50th=[ 106], 99.90th=[ 124], 99.95th=[ 144], 00:23:23.234 | 99.99th=[ 148] 00:23:23.234 bw ( KiB/s): min=158720, max=332800, per=7.45%, avg=252032.00, stdev=53312.45, samples=20 00:23:23.234 iops : min= 620, max= 1300, avg=984.50, stdev=208.25, samples=20 00:23:23.234 lat (msec) : 10=0.06%, 20=0.06%, 50=3.76%, 100=95.41%, 250=0.71% 00:23:23.234 cpu : usr=1.98%, sys=3.01%, ctx=2204, majf=0, minf=1 00:23:23.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:23.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.234 issued rwts: total=0,9908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.234 job9: (groupid=0, jobs=1): err= 0: pid=403247: Tue Nov 19 01:07:28 2024 00:23:23.234 write: IOPS=1338, BW=335MiB/s (351MB/s)(3361MiB/10048msec); 0 zone resets 00:23:23.234 slat (usec): min=21, max=49905, avg=722.04, stdev=2422.94 00:23:23.234 clat (msec): min=5, max=142, avg=47.09, stdev=18.39 00:23:23.234 lat (msec): min=5, max=145, avg=47.81, stdev=18.79 00:23:23.234 clat percentiles (msec): 00:23:23.234 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 39], 00:23:23.234 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 41], 00:23:23.234 | 70.00th=[ 42], 80.00th=[ 50], 90.00th=[ 94], 95.00th=[ 96], 00:23:23.234 | 99.00th=[ 100], 99.50th=[ 102], 99.90th=[ 112], 99.95th=[ 138], 00:23:23.234 | 99.99th=[ 142] 00:23:23.234 bw ( KiB/s): min=159232, max=415744, per=10.12%, avg=342579.20, stdev=96778.46, samples=20 00:23:23.234 iops : min= 622, max= 1624, avg=1338.20, stdev=378.04, samples=20 00:23:23.234 lat (msec) : 10=0.07%, 20=0.17%, 50=79.89%, 100=19.17%, 250=0.69% 00:23:23.234 cpu : usr=2.84%, sys=4.11%, ctx=2707, majf=0, minf=1 00:23:23.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:23.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.234 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.234 issued rwts: total=0,13445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.234 job10: (groupid=0, jobs=1): err= 0: pid=403254: Tue Nov 19 01:07:28 2024 00:23:23.234 write: IOPS=1024, BW=256MiB/s (269MB/s)(2577MiB/10059msec); 0 zone resets 00:23:23.234 slat (usec): min=21, max=30074, avg=947.49, stdev=2872.60 00:23:23.234 clat (msec): min=31, max=119, avg=61.47, stdev=15.42 00:23:23.234 lat (msec): min=31, max=119, avg=62.42, stdev=15.83 00:23:23.234 clat percentiles (msec): 00:23:23.234 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 54], 00:23:23.234 | 30.00th=[ 55], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:23:23.234 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 88], 95.00th=[ 91], 00:23:23.234 | 99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 115], 99.95th=[ 116], 00:23:23.234 | 99.99th=[ 121] 00:23:23.234 bw ( KiB/s): min=178688, max=434176, per=7.75%, avg=262272.00, stdev=62645.93, samples=20 00:23:23.234 iops : min= 698, max= 1696, avg=1024.50, stdev=244.71, samples=20 00:23:23.234 lat (msec) : 50=11.35%, 100=87.73%, 250=0.92% 00:23:23.234 cpu : usr=2.32%, sys=2.89%, ctx=2207, majf=0, minf=1 00:23:23.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:23.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.234 issued rwts: total=0,10309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.234 00:23:23.234 Run status group 0 (all jobs): 00:23:23.234 WRITE: bw=3306MiB/s (3466MB/s), 211MiB/s-533MiB/s (222MB/s-559MB/s), io=32.5GiB (34.9GB), run=10015-10059msec 00:23:23.234 00:23:23.234 Disk stats (read/write): 00:23:23.234 nvme0n1: ios=49/42441, merge=0/0, ticks=19/1234765, in_queue=1234784, util=97.65% 00:23:23.234 nvme10n1: ios=0/24215, merge=0/0, ticks=0/1236413, in_queue=1236413, util=97.77% 00:23:23.234 nvme11n1: ios=0/17354, merge=0/0, ticks=0/1232608, in_queue=1232608, util=97.87% 00:23:23.234 nvme2n1: ios=0/17643, merge=0/0, ticks=0/1234986, in_queue=1234986, util=98.00% 00:23:23.234 nvme3n1: ios=0/20868, merge=0/0, ticks=0/1232129, in_queue=1232129, util=98.03% 00:23:23.234 nvme4n1: ios=0/18056, merge=0/0, ticks=0/1232350, in_queue=1232350, util=98.32% 00:23:23.234 nvme5n1: ios=0/39683, merge=0/0, ticks=0/1231701, in_queue=1231701, util=98.45% 00:23:23.234 nvme6n1: ios=0/16801, merge=0/0, ticks=0/1233867, in_queue=1233867, util=98.53% 00:23:23.234 nvme7n1: ios=0/19626, merge=0/0, ticks=0/1234052, in_queue=1234052, util=98.84% 00:23:23.234 nvme8n1: ios=0/26698, merge=0/0, ticks=0/1233096, in_queue=1233096, util=98.98% 00:23:23.234 nvme9n1: ios=0/20467, merge=0/0, ticks=0/1234756, in_queue=1234756, util=99.08% 00:23:23.234 01:07:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:23:23.234 01:07:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:23:23.234 01:07:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.235 01:07:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:23.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:23.235 01:07:29 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.235 01:07:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:24.172 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:24.172 01:07:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:25.106 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 
controller(s) 00:23:25.106 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.107 01:07:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:26.041 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:26.041 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:26.041 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:26.041 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:26.041 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:23:26.041 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:26.041 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:23:26.041 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:26.042 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:26.042 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.042 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:26.042 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.042 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:26.042 01:07:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:26.977 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:26.977 01:07:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:27.912 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.912 01:07:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode7 00:23:28.848 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:28.848 01:07:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:29.783 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:29.783 01:07:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:30.718 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:30.718 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.719 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:31.286 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:31.286 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:31.286 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:31.286 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:31.286 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:23:31.544 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:31.544 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:23:31.544 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:31.544 01:07:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:31.544 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.544 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:31.544 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.544 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.544 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:32.480 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:32.480 rmmod nvme_rdma 00:23:32.480 rmmod nvme_fabrics 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 396748 ']' 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 396748 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 396748 ']' 00:23:32.480 01:07:38 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 396748 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.480 01:07:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396748 00:23:32.480 01:07:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.480 01:07:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.480 01:07:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396748' 00:23:32.480 killing process with pid 396748 00:23:32.480 01:07:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 396748 00:23:32.480 01:07:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 396748 00:23:35.768 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:35.768 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:35.768 00:23:35.768 real 1m8.091s 00:23:35.768 user 4m21.665s 00:23:35.768 sys 0m17.024s 00:23:35.768 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:35.768 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:35.768 ************************************ 00:23:35.768 END TEST nvmf_multiconnection 00:23:35.768 ************************************ 00:23:35.768 01:07:42 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:35.768 01:07:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:35.768 01:07:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.768 01:07:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:35.768 ************************************ 00:23:35.768 START TEST nvmf_initiator_timeout 00:23:35.768 ************************************ 00:23:35.768 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:36.027 * Looking for test storage... 
00:23:36.027 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:23:36.027 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:36.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.028 --rc genhtml_branch_coverage=1 00:23:36.028 --rc genhtml_function_coverage=1 00:23:36.028 --rc genhtml_legend=1 00:23:36.028 --rc geninfo_all_blocks=1 00:23:36.028 --rc geninfo_unexecuted_blocks=1 00:23:36.028 00:23:36.028 ' 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:36.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.028 --rc genhtml_branch_coverage=1 00:23:36.028 --rc genhtml_function_coverage=1 00:23:36.028 --rc genhtml_legend=1 00:23:36.028 --rc geninfo_all_blocks=1 00:23:36.028 --rc geninfo_unexecuted_blocks=1 00:23:36.028 00:23:36.028 ' 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:36.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.028 --rc genhtml_branch_coverage=1 00:23:36.028 --rc genhtml_function_coverage=1 00:23:36.028 --rc genhtml_legend=1 00:23:36.028 --rc geninfo_all_blocks=1 00:23:36.028 --rc geninfo_unexecuted_blocks=1 00:23:36.028 00:23:36.028 ' 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:36.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.028 --rc genhtml_branch_coverage=1 00:23:36.028 --rc genhtml_function_coverage=1 00:23:36.028 --rc genhtml_legend=1 00:23:36.028 --rc geninfo_all_blocks=1 00:23:36.028 --rc geninfo_unexecuted_blocks=1 00:23:36.028 00:23:36.028 ' 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.028 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.029 01:07:42 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.029 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.029 01:07:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:42.592 01:07:48 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:42.592 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:42.592 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@405 -- # modinfo irdma 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:42.592 Found net devices under 0000:af:00.0: cvl_0_0 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:42.592 Found net devices under 0000:af:00.1: cvl_0_1 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:42.592 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # 
local net_dev rxe_net_dev rxe_net_devs 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo cvl_0_0 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo cvl_0_1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:23:42.593 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:23:42.593 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:23:42.593 altname enp175s0f0np0 00:23:42.593 altname ens801f0np0 00:23:42.593 inet 192.168.100.8/24 scope global cvl_0_0 00:23:42.593 valid_lft forever preferred_lft forever 00:23:42.593 inet6 fe80::b696:91ff:fea5:c8d4/64 
scope link proto kernel_ll 00:23:42.593 valid_lft forever preferred_lft forever 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:23:42.593 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:23:42.593 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:23:42.593 altname enp175s0f1np1 00:23:42.593 altname ens801f1np1 00:23:42.593 inet 192.168.100.9/24 scope global cvl_0_1 00:23:42.593 valid_lft forever preferred_lft forever 00:23:42.593 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:23:42.593 valid_lft forever preferred_lft forever 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo cvl_0_0 00:23:42.593 01:07:48 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo cvl_0_1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:42.593 192.168.100.9' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:42.593 192.168.100.9' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:42.593 192.168.100.9' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:42.593 01:07:48 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=409988 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 409988 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 409988 ']' 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.593 01:07:48 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.593 [2024-11-19 01:07:48.475642] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:42.593 [2024-11-19 01:07:48.475734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.593 [2024-11-19 01:07:48.602858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.593 [2024-11-19 01:07:48.710180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.593 [2024-11-19 01:07:48.710231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.593 [2024-11-19 01:07:48.710241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.593 [2024-11-19 01:07:48.710251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:42.593 [2024-11-19 01:07:48.710259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.594 [2024-11-19 01:07:48.712581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.594 [2024-11-19 01:07:48.712671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.594 [2024-11-19 01:07:48.712741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.594 [2024-11-19 01:07:48.712763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.851 Malloc0 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.851 Delay0 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.851 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.851 [2024-11-19 01:07:49.449410] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000029d40/0x617000007c40) succeed. 00:23:42.851 [2024-11-19 01:07:49.459245] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029ec0/0x617000007fc0) succeed. 
00:23:42.852 [2024-11-19 01:07:49.459274] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:42.852 [2024-11-19 01:07:49.491676] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.852 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:43.110 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:43.110 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:23:43.110 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:43.110 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:43.110 01:07:49 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:23:45.640 01:07:51 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:45.640 01:07:51 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:45.640 01:07:51 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:23:45.640 01:07:51 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:45.640 01:07:51 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:45.640 01:07:51 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:23:45.640 01:07:51 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=410483 00:23:45.640 01:07:51 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:45.640 01:07:51 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:45.640 [global] 00:23:45.640 thread=1 00:23:45.640 invalidate=1 00:23:45.640 rw=write 00:23:45.640 time_based=1 00:23:45.640 runtime=60 00:23:45.640 ioengine=libaio 00:23:45.640 direct=1 00:23:45.640 bs=4096 00:23:45.640 iodepth=1 00:23:45.640 norandommap=0 00:23:45.640 numjobs=1 00:23:45.640 00:23:45.640 verify_dump=1 00:23:45.640 verify_backlog=512 00:23:45.640 verify_state_save=0 00:23:45.640 do_verify=1 00:23:45.640 verify=crc32c-intel 00:23:45.640 [job0] 00:23:45.640 filename=/dev/nvme0n1 00:23:45.640 Could not set queue depth (nvme0n1) 00:23:45.640 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:45.640 fio-3.35 00:23:45.640 Starting 1 thread 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:48.173 true 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:48.173 true 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:48.173 true 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.173 
01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:48.173 true 00:23:48.173 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.174 01:07:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:51.457 true 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:51.457 true 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:51.457 true 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:51.457 true 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:51.457 01:07:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 410483 00:24:47.669 00:24:47.669 job0: (groupid=0, jobs=1): err= 0: pid=410613: Tue Nov 19 01:08:52 2024 00:24:47.669 read: IOPS=1209, BW=4837KiB/s (4953kB/s)(283MiB/60000msec) 00:24:47.669 slat (nsec): min=6028, max=36948, avg=7434.51, stdev=943.06 00:24:47.669 clat (usec): min=95, max=709, avg=117.97, stdev= 7.73 00:24:47.669 lat (usec): min=106, max=717, avg=125.41, stdev= 7.76 00:24:47.669 clat percentiles (usec): 00:24:47.669 | 1.00th=[ 106], 5.00th=[ 110], 10.00th=[ 111], 20.00th=[ 113], 00:24:47.669 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 118], 60.00th=[ 120], 00:24:47.669 | 70.00th=[ 121], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 130], 00:24:47.669 | 99.00th=[ 137], 99.50th=[ 139], 99.90th=[ 153], 99.95th=[ 169], 
00:24:47.669 | 99.99th=[ 347] 00:24:47.669 write: IOPS=1211, BW=4847KiB/s (4963kB/s)(284MiB/60000msec); 0 zone resets 00:24:47.669 slat (usec): min=5, max=14144, avg= 9.84, stdev=61.95 00:24:47.669 clat (usec): min=23, max=41473k, avg=686.04, stdev=153811.89 00:24:47.669 lat (usec): min=105, max=41473k, avg=695.87, stdev=153811.90 00:24:47.669 clat percentiles (usec): 00:24:47.669 | 1.00th=[ 104], 5.00th=[ 106], 10.00th=[ 109], 20.00th=[ 111], 00:24:47.669 | 30.00th=[ 113], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 117], 00:24:47.669 | 70.00th=[ 119], 80.00th=[ 121], 90.00th=[ 124], 95.00th=[ 127], 00:24:47.669 | 99.00th=[ 135], 99.50th=[ 137], 99.90th=[ 147], 99.95th=[ 155], 00:24:47.669 | 99.99th=[ 281] 00:24:47.669 bw ( KiB/s): min= 3312, max=16384, per=100.00%, avg=15362.59, stdev=2161.17, samples=37 00:24:47.669 iops : min= 828, max= 4096, avg=3840.65, stdev=540.29, samples=37 00:24:47.669 lat (usec) : 50=0.01%, 100=0.02%, 250=99.96%, 500=0.01%, 750=0.01% 00:24:47.669 lat (usec) : 1000=0.01% 00:24:47.669 lat (msec) : >=2000=0.01% 00:24:47.669 cpu : usr=1.40%, sys=2.77%, ctx=145268, majf=0, minf=105 00:24:47.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:47.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.669 issued rwts: total=72556,72704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:47.669 00:24:47.669 Run status group 0 (all jobs): 00:24:47.669 READ: bw=4837KiB/s (4953kB/s), 4837KiB/s-4837KiB/s (4953kB/s-4953kB/s), io=283MiB (297MB), run=60000-60000msec 00:24:47.669 WRITE: bw=4847KiB/s (4963kB/s), 4847KiB/s-4847KiB/s (4963kB/s-4963kB/s), io=284MiB (298MB), run=60000-60000msec 00:24:47.669 00:24:47.669 Disk stats (read/write): 00:24:47.669 nvme0n1: ios=72520/72266, merge=0/0, ticks=8108/8198, in_queue=16306, util=99.57% 00:24:47.669 01:08:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:47.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:47.669 nvmf hotplug test: fio successful as expected 00:24:47.669 01:08:53 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:47.669 rmmod nvme_rdma 00:24:47.669 rmmod nvme_fabrics 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 409988 ']' 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 409988 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 409988 ']' 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 409988 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 409988 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 409988' 00:24:47.669 killing process with pid 409988 00:24:47.669 01:08:53 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 409988 00:24:47.669 01:08:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 409988 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:47.929 00:24:47.929 real 1m12.189s 00:24:47.929 user 4m29.990s 00:24:47.929 sys 0m6.722s 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:47.929 ************************************ 00:24:47.929 END TEST nvmf_initiator_timeout 00:24:47.929 ************************************ 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.929 01:08:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:48.188 ************************************ 00:24:48.188 START TEST nvmf_srq_overwhelm 00:24:48.188 ************************************ 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:24:48.189 * Looking for test storage... 
00:24:48.189 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lcov --version 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.189 --rc genhtml_branch_coverage=1 00:24:48.189 --rc genhtml_function_coverage=1 00:24:48.189 --rc genhtml_legend=1 00:24:48.189 --rc geninfo_all_blocks=1 00:24:48.189 --rc geninfo_unexecuted_blocks=1 00:24:48.189 00:24:48.189 ' 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.189 --rc genhtml_branch_coverage=1 00:24:48.189 --rc genhtml_function_coverage=1 00:24:48.189 --rc genhtml_legend=1 00:24:48.189 --rc geninfo_all_blocks=1 00:24:48.189 --rc geninfo_unexecuted_blocks=1 00:24:48.189 00:24:48.189 ' 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.189 --rc genhtml_branch_coverage=1 00:24:48.189 --rc genhtml_function_coverage=1 00:24:48.189 --rc genhtml_legend=1 00:24:48.189 --rc geninfo_all_blocks=1 00:24:48.189 --rc geninfo_unexecuted_blocks=1 00:24:48.189 00:24:48.189 ' 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.189 --rc genhtml_branch_coverage=1 00:24:48.189 --rc genhtml_function_coverage=1 00:24:48.189 --rc genhtml_legend=1 00:24:48.189 --rc geninfo_all_blocks=1 00:24:48.189 --rc geninfo_unexecuted_blocks=1 00:24:48.189 00:24:48.189 ' 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.189 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:48.190 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.190 01:08:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@320 -- # e810=() 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.759 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:54.760 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:54.760 01:09:00 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:54.760 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@405 -- # modinfo irdma 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:54.760 Found net devices under 0000:af:00.0: cvl_0_0 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.760 
01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:54.760 Found net devices under 0000:af:00.1: cvl_0_1 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo cvl_0_0 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo cvl_0_1 00:24:54.760 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:24:54.761 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:54.761 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:24:54.761 altname enp175s0f0np0 00:24:54.761 altname ens801f0np0 00:24:54.761 inet 192.168.100.8/24 scope global cvl_0_0 00:24:54.761 valid_lft forever preferred_lft forever 00:24:54.761 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:24:54.761 valid_lft forever preferred_lft forever 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:24:54.761 01:09:00 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:24:54.761 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:54.761 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:24:54.761 altname enp175s0f1np1 00:24:54.761 altname ens801f1np1 00:24:54.761 inet 192.168.100.9/24 scope global cvl_0_1 00:24:54.761 valid_lft forever preferred_lft forever 00:24:54.761 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:24:54.761 valid_lft forever preferred_lft forever 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo cvl_0_0 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo cvl_0_1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:54.761 192.168.100.9' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:54.761 192.168.100.9' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:54.761 192.168.100.9' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # 
nvmfappstart -m 0xF 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=423736 00:24:54.761 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 423736 00:24:54.762 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:54.762 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 423736 ']' 00:24:54.762 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.762 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.762 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.762 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.762 01:09:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:54.762 [2024-11-19 01:09:00.721464] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:54.762 [2024-11-19 01:09:00.721562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.762 [2024-11-19 01:09:00.849927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.762 [2024-11-19 01:09:00.957889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.762 [2024-11-19 01:09:00.957938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.762 [2024-11-19 01:09:00.957949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.762 [2024-11-19 01:09:00.957960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.762 [2024-11-19 01:09:00.957968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:54.762 [2024-11-19 01:09:00.960429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.762 [2024-11-19 01:09:00.960508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.762 [2024-11-19 01:09:00.960584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.762 [2024-11-19 01:09:00.960608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.021 [2024-11-19 01:09:01.589043] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:24:55.021 [2024-11-19 01:09:01.598614] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:24:55.021 [2024-11-19 01:09:01.598644] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.021 Malloc0 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.021 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.280 [2024-11-19 01:09:01.727508] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:55.280 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:55.539 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:24:55.539 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:55.539 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:55.539 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.539 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.539 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.539 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:55.539 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.539 01:09:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.539 Malloc1 00:24:55.539 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.539 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:55.539 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.539 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.539 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.539 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:55.539 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.539 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.539 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.539 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:55.798 01:09:02 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.798 Malloc2 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.798 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 
00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.057 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.317 Malloc3 00:24:56.317 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.317 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:56.317 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.317 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.317 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.317 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:24:56.317 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.317 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.317 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.317 01:09:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # 
return 0 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.576 Malloc4 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.576 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # 
for i in $(seq 0 5) 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.835 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:57.095 Malloc5 00:24:57.095 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.095 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:57.095 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.095 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:57.095 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.095 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:24:57.095 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.095 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:57.095 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.095 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:24:57.353 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:24:57.353 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:24:57.353 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:57.353 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:24:57.353 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:57.353 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:24:57.353 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:24:57.353 01:09:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 
13 00:24:57.353 [global] 00:24:57.353 thread=1 00:24:57.353 invalidate=1 00:24:57.353 rw=read 00:24:57.353 time_based=1 00:24:57.353 runtime=10 00:24:57.353 ioengine=libaio 00:24:57.353 direct=1 00:24:57.353 bs=1048576 00:24:57.353 iodepth=128 00:24:57.353 norandommap=1 00:24:57.353 numjobs=13 00:24:57.353 00:24:57.353 [job0] 00:24:57.353 filename=/dev/nvme0n1 00:24:57.353 [job1] 00:24:57.353 filename=/dev/nvme2n1 00:24:57.353 [job2] 00:24:57.353 filename=/dev/nvme3n1 00:24:57.353 [job3] 00:24:57.353 filename=/dev/nvme4n1 00:24:57.353 [job4] 00:24:57.353 filename=/dev/nvme5n1 00:24:57.353 [job5] 00:24:57.353 filename=/dev/nvme6n1 00:24:57.611 Could not set queue depth (nvme0n1) 00:24:57.611 Could not set queue depth (nvme2n1) 00:24:57.611 Could not set queue depth (nvme3n1) 00:24:57.611 Could not set queue depth (nvme4n1) 00:24:57.611 Could not set queue depth (nvme5n1) 00:24:57.611 Could not set queue depth (nvme6n1) 00:24:57.869 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:57.869 ... 00:24:57.869 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:57.869 ... 00:24:57.869 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:57.869 ... 00:24:57.869 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:57.869 ... 00:24:57.869 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:57.869 ... 00:24:57.869 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:24:57.869 ... 
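The trace up to this point is the preparation phase of srq_overwhelm.sh: each of the six loop iterations creates an NVMe-oF subsystem, backs it with a 64 MiB malloc bdev (512-byte blocks), exposes it on the RDMA listener at 192.168.100.8:4420, connects to it with nvme-cli, and waits for the namespace block device to appear; the fio wrapper is then pointed at all connected namespaces (6 devices x 13 jobs accounts for the 78 fio threads started below). A condensed sketch of that sequence follows — it is not the script itself; it assumes SPDK's scripts/rpc.py is on PATH, substitutes a simple poll for the waitforblk helper, and takes its argument values from the trace above:

  for i in $(seq 0 5); do
      # target side: subsystem cnode$i backed by a 64 MiB malloc bdev with 512 B blocks
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      rpc.py bdev_malloc_create 64 512 -b Malloc$i
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
      # initiator side: connect over RDMA, then poll until the namespace block device shows up
      nvme connect -i 15 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
          --hostid=801347e8-3fd0-e911-906e-0017a4403562 \
          -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
      # block-device name is resolved at run time; nvme${i}n1 is only illustrative
      until lsblk -l -o NAME | grep -q -w "nvme${i}n1"; do sleep 1; done
  done
  # 1 MiB sequential reads, iodepth 128, 13 jobs per namespace, 10 s runtime, libaio
  scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13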
00:24:57.869 fio-3.35 00:24:57.869 Starting 78 threads 00:25:10.078 00:25:10.078 job0: (groupid=0, jobs=1): err= 0: pid=424724: Tue Nov 19 01:09:14 2024 00:25:10.078 read: IOPS=38, BW=38.2MiB/s (40.1MB/s)(387MiB/10129msec) 00:25:10.078 slat (usec): min=38, max=296386, avg=25868.78, stdev=40573.91 00:25:10.078 clat (msec): min=115, max=3870, avg=2873.79, stdev=698.99 00:25:10.078 lat (msec): min=194, max=3872, avg=2899.66, stdev=693.47 00:25:10.078 clat percentiles (msec): 00:25:10.078 | 1.00th=[ 609], 5.00th=[ 1536], 10.00th=[ 1838], 20.00th=[ 2400], 00:25:10.078 | 30.00th=[ 2668], 40.00th=[ 2769], 50.00th=[ 2970], 60.00th=[ 3104], 00:25:10.078 | 70.00th=[ 3373], 80.00th=[ 3540], 90.00th=[ 3641], 95.00th=[ 3675], 00:25:10.078 | 99.00th=[ 3809], 99.50th=[ 3842], 99.90th=[ 3876], 99.95th=[ 3876], 00:25:10.078 | 99.99th=[ 3876] 00:25:10.078 bw ( KiB/s): min= 4096, max=65536, per=0.83%, avg=35359.40, stdev=18732.08, samples=15 00:25:10.078 iops : min= 4, max= 64, avg=34.47, stdev=18.35, samples=15 00:25:10.078 lat (msec) : 250=0.52%, 500=0.26%, 750=0.52%, 1000=0.26%, 2000=12.14% 00:25:10.078 lat (msec) : >=2000=86.30% 00:25:10.078 cpu : usr=0.00%, sys=1.14%, ctx=1006, majf=0, minf=32769 00:25:10.078 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.3%, >=64=83.7% 00:25:10.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.078 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:10.078 issued rwts: total=387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.078 job0: (groupid=0, jobs=1): err= 0: pid=424725: Tue Nov 19 01:09:14 2024 00:25:10.078 read: IOPS=50, BW=50.5MiB/s (53.0MB/s)(511MiB/10111msec) 00:25:10.078 slat (usec): min=46, max=233969, avg=19596.00, stdev=29969.89 00:25:10.078 clat (msec): min=94, max=3514, avg=2292.22, stdev=744.48 00:25:10.078 lat (msec): min=131, max=3525, avg=2311.81, stdev=743.20 00:25:10.078 clat percentiles (msec): 00:25:10.078 | 1.00th=[ 209], 5.00th=[ 498], 10.00th=[ 1334], 20.00th=[ 1871], 00:25:10.078 | 30.00th=[ 2072], 40.00th=[ 2265], 50.00th=[ 2400], 60.00th=[ 2567], 00:25:10.078 | 70.00th=[ 2702], 80.00th=[ 2836], 90.00th=[ 3104], 95.00th=[ 3406], 00:25:10.078 | 99.00th=[ 3440], 99.50th=[ 3473], 99.90th=[ 3507], 99.95th=[ 3507], 00:25:10.078 | 99.99th=[ 3507] 00:25:10.078 bw ( KiB/s): min=18432, max=92160, per=1.15%, avg=49027.31, stdev=20954.90, samples=16 00:25:10.078 iops : min= 18, max= 90, avg=47.87, stdev=20.47, samples=16 00:25:10.078 lat (msec) : 100=0.20%, 250=1.37%, 500=3.52%, 750=2.94%, 1000=0.78% 00:25:10.078 lat (msec) : 2000=18.00%, >=2000=73.19% 00:25:10.078 cpu : usr=0.01%, sys=1.27%, ctx=1010, majf=0, minf=32769 00:25:10.078 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.7% 00:25:10.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.078 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.078 issued rwts: total=511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.078 job0: (groupid=0, jobs=1): err= 0: pid=424726: Tue Nov 19 01:09:14 2024 00:25:10.078 read: IOPS=57, BW=57.8MiB/s (60.6MB/s)(582MiB/10067msec) 00:25:10.078 slat (usec): min=38, max=164827, avg=17257.17, stdev=32456.30 00:25:10.078 clat (msec): min=19, max=3273, avg=1815.17, stdev=621.59 00:25:10.078 lat (msec): min=83, max=3281, avg=1832.43, stdev=623.98 00:25:10.078 clat percentiles 
(msec): 00:25:10.078 | 1.00th=[ 97], 5.00th=[ 414], 10.00th=[ 927], 20.00th=[ 1586], 00:25:10.078 | 30.00th=[ 1670], 40.00th=[ 1720], 50.00th=[ 1804], 60.00th=[ 1871], 00:25:10.078 | 70.00th=[ 2165], 80.00th=[ 2299], 90.00th=[ 2433], 95.00th=[ 2836], 00:25:10.078 | 99.00th=[ 3205], 99.50th=[ 3272], 99.90th=[ 3272], 99.95th=[ 3272], 00:25:10.078 | 99.99th=[ 3272] 00:25:10.078 bw ( KiB/s): min=30720, max=92160, per=1.57%, avg=66638.77, stdev=21408.14, samples=13 00:25:10.078 iops : min= 30, max= 90, avg=65.08, stdev=20.91, samples=13 00:25:10.078 lat (msec) : 20=0.17%, 100=0.86%, 250=2.58%, 500=1.89%, 750=2.06% 00:25:10.078 lat (msec) : 1000=2.75%, 2000=54.47%, >=2000=35.22% 00:25:10.078 cpu : usr=0.02%, sys=1.22%, ctx=772, majf=0, minf=32769 00:25:10.078 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:25:10.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.078 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.078 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.078 job0: (groupid=0, jobs=1): err= 0: pid=424727: Tue Nov 19 01:09:14 2024 00:25:10.078 read: IOPS=62, BW=62.1MiB/s (65.1MB/s)(627MiB/10102msec) 00:25:10.078 slat (usec): min=29, max=202191, avg=15971.59, stdev=35541.07 00:25:10.078 clat (msec): min=85, max=2487, avg=1797.13, stdev=497.91 00:25:10.078 lat (msec): min=120, max=2509, avg=1813.11, stdev=497.67 00:25:10.078 clat percentiles (msec): 00:25:10.078 | 1.00th=[ 150], 5.00th=[ 531], 10.00th=[ 1234], 20.00th=[ 1418], 00:25:10.078 | 30.00th=[ 1636], 40.00th=[ 1854], 50.00th=[ 2005], 60.00th=[ 2022], 00:25:10.078 | 70.00th=[ 2072], 80.00th=[ 2198], 90.00th=[ 2265], 95.00th=[ 2333], 00:25:10.078 | 99.00th=[ 2400], 99.50th=[ 2467], 99.90th=[ 2500], 99.95th=[ 2500], 00:25:10.078 | 99.99th=[ 2500] 00:25:10.078 bw ( KiB/s): min=28672, max=129024, per=1.50%, avg=63872.00, stdev=28458.79, samples=16 00:25:10.078 iops : min= 28, max= 126, avg=62.38, stdev=27.79, samples=16 00:25:10.078 lat (msec) : 100=0.16%, 250=1.75%, 500=2.39%, 750=1.59%, 1000=1.44% 00:25:10.078 lat (msec) : 2000=44.82%, >=2000=47.85% 00:25:10.078 cpu : usr=0.03%, sys=1.17%, ctx=991, majf=0, minf=32769 00:25:10.078 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=90.0% 00:25:10.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.078 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.078 issued rwts: total=627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.078 job0: (groupid=0, jobs=1): err= 0: pid=424728: Tue Nov 19 01:09:14 2024 00:25:10.078 read: IOPS=51, BW=51.3MiB/s (53.8MB/s)(519MiB/10114msec) 00:25:10.078 slat (usec): min=34, max=221185, avg=19288.97, stdev=42573.72 00:25:10.078 clat (msec): min=100, max=2931, avg=2130.46, stdev=720.08 00:25:10.078 lat (msec): min=169, max=3059, avg=2149.75, stdev=721.23 00:25:10.078 clat percentiles (msec): 00:25:10.078 | 1.00th=[ 241], 5.00th=[ 388], 10.00th=[ 860], 20.00th=[ 1770], 00:25:10.078 | 30.00th=[ 2106], 40.00th=[ 2232], 50.00th=[ 2366], 60.00th=[ 2467], 00:25:10.078 | 70.00th=[ 2567], 80.00th=[ 2668], 90.00th=[ 2769], 95.00th=[ 2869], 00:25:10.078 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 2937], 99.95th=[ 2937], 00:25:10.078 | 99.99th=[ 2937] 00:25:10.078 bw ( KiB/s): min=18432, max=96256, per=1.26%, avg=53384.53, 
stdev=22864.17, samples=15 00:25:10.078 iops : min= 18, max= 94, avg=52.13, stdev=22.33, samples=15 00:25:10.078 lat (msec) : 250=2.70%, 500=3.28%, 750=3.85%, 1000=2.70%, 2000=10.21% 00:25:10.078 lat (msec) : >=2000=77.26% 00:25:10.078 cpu : usr=0.00%, sys=1.20%, ctx=738, majf=0, minf=32769 00:25:10.078 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.9% 00:25:10.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.078 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.078 issued rwts: total=519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.078 job0: (groupid=0, jobs=1): err= 0: pid=424729: Tue Nov 19 01:09:14 2024 00:25:10.078 read: IOPS=51, BW=51.6MiB/s (54.1MB/s)(522MiB/10113msec) 00:25:10.078 slat (usec): min=113, max=212069, avg=19160.09, stdev=33523.29 00:25:10.078 clat (msec): min=109, max=3004, avg=2229.04, stdev=694.35 00:25:10.078 lat (msec): min=113, max=3014, avg=2248.20, stdev=694.96 00:25:10.078 clat percentiles (msec): 00:25:10.078 | 1.00th=[ 199], 5.00th=[ 510], 10.00th=[ 919], 20.00th=[ 2022], 00:25:10.078 | 30.00th=[ 2165], 40.00th=[ 2366], 50.00th=[ 2433], 60.00th=[ 2500], 00:25:10.078 | 70.00th=[ 2635], 80.00th=[ 2769], 90.00th=[ 2836], 95.00th=[ 2903], 00:25:10.078 | 99.00th=[ 2937], 99.50th=[ 2970], 99.90th=[ 3004], 99.95th=[ 3004], 00:25:10.078 | 99.99th=[ 3004] 00:25:10.078 bw ( KiB/s): min=22483, max=67584, per=1.19%, avg=50553.56, stdev=15111.27, samples=16 00:25:10.078 iops : min= 21, max= 66, avg=49.25, stdev=14.97, samples=16 00:25:10.078 lat (msec) : 250=1.72%, 500=3.26%, 750=2.87%, 1000=2.87%, 2000=8.43% 00:25:10.078 lat (msec) : >=2000=80.84% 00:25:10.078 cpu : usr=0.00%, sys=1.20%, ctx=1047, majf=0, minf=32769 00:25:10.078 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=87.9% 00:25:10.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.078 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.078 issued rwts: total=522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job0: (groupid=0, jobs=1): err= 0: pid=424730: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=43, BW=43.7MiB/s (45.8MB/s)(441MiB/10087msec) 00:25:10.079 slat (usec): min=37, max=245470, avg=22673.73, stdev=48353.11 00:25:10.079 clat (msec): min=85, max=4587, avg=2321.46, stdev=1462.98 00:25:10.079 lat (msec): min=86, max=4590, avg=2344.13, stdev=1471.90 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 91], 5.00th=[ 305], 10.00th=[ 584], 20.00th=[ 1116], 00:25:10.079 | 30.00th=[ 1250], 40.00th=[ 1401], 50.00th=[ 1502], 60.00th=[ 2500], 00:25:10.079 | 70.00th=[ 3842], 80.00th=[ 4178], 90.00th=[ 4329], 95.00th=[ 4329], 00:25:10.079 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4597], 99.95th=[ 4597], 00:25:10.079 | 99.99th=[ 4597] 00:25:10.079 bw ( KiB/s): min=18432, max=126976, per=1.26%, avg=53589.33, stdev=39334.33, samples=12 00:25:10.079 iops : min= 18, max= 124, avg=52.33, stdev=38.41, samples=12 00:25:10.079 lat (msec) : 100=1.36%, 250=2.49%, 500=5.22%, 750=2.72%, 1000=2.72% 00:25:10.079 lat (msec) : 2000=40.59%, >=2000=44.90% 00:25:10.079 cpu : usr=0.00%, sys=1.01%, ctx=868, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.3%, >=64=85.7% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:10.079 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.079 issued rwts: total=441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job0: (groupid=0, jobs=1): err= 0: pid=424731: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=36, BW=36.7MiB/s (38.5MB/s)(371MiB/10112msec) 00:25:10.079 slat (usec): min=334, max=210266, avg=26955.25, stdev=49243.65 00:25:10.079 clat (msec): min=109, max=4445, avg=3028.22, stdev=1237.52 00:25:10.079 lat (msec): min=113, max=4446, avg=3055.18, stdev=1239.93 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 118], 5.00th=[ 510], 10.00th=[ 793], 20.00th=[ 1687], 00:25:10.079 | 30.00th=[ 2769], 40.00th=[ 3373], 50.00th=[ 3708], 60.00th=[ 3775], 00:25:10.079 | 70.00th=[ 3876], 80.00th=[ 3910], 90.00th=[ 4044], 95.00th=[ 4212], 00:25:10.079 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4463], 99.95th=[ 4463], 00:25:10.079 | 99.99th=[ 4463] 00:25:10.079 bw ( KiB/s): min=12288, max=63488, per=0.84%, avg=35689.29, stdev=13627.30, samples=14 00:25:10.079 iops : min= 12, max= 62, avg=34.79, stdev=13.34, samples=14 00:25:10.079 lat (msec) : 250=2.16%, 500=2.16%, 750=4.58%, 1000=5.39%, 2000=8.63% 00:25:10.079 lat (msec) : >=2000=77.09% 00:25:10.079 cpu : usr=0.04%, sys=1.15%, ctx=834, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.0% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:10.079 issued rwts: total=371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job0: (groupid=0, jobs=1): err= 0: pid=424732: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=43, BW=43.7MiB/s (45.8MB/s)(442MiB/10122msec) 00:25:10.079 slat (usec): min=25, max=220776, avg=22799.31, stdev=48932.28 00:25:10.079 clat (msec): min=42, max=4770, avg=2208.78, stdev=1276.61 00:25:10.079 lat (msec): min=168, max=4897, avg=2231.58, stdev=1286.65 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 174], 5.00th=[ 418], 10.00th=[ 625], 20.00th=[ 1028], 00:25:10.079 | 30.00th=[ 1485], 40.00th=[ 1603], 50.00th=[ 1770], 60.00th=[ 2433], 00:25:10.079 | 70.00th=[ 3205], 80.00th=[ 3608], 90.00th=[ 3977], 95.00th=[ 4463], 00:25:10.079 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799], 00:25:10.079 | 99.99th=[ 4799] 00:25:10.079 bw ( KiB/s): min= 2048, max=114688, per=1.26%, avg=53589.33, stdev=33797.17, samples=12 00:25:10.079 iops : min= 2, max= 112, avg=52.33, stdev=33.01, samples=12 00:25:10.079 lat (msec) : 50=0.23%, 250=2.71%, 500=4.75%, 750=7.01%, 1000=3.62% 00:25:10.079 lat (msec) : 2000=36.20%, >=2000=45.48% 00:25:10.079 cpu : usr=0.03%, sys=0.94%, ctx=852, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.7% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.079 issued rwts: total=442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job0: (groupid=0, jobs=1): err= 0: pid=424733: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=41, BW=41.6MiB/s (43.7MB/s)(420MiB/10088msec) 00:25:10.079 slat (usec): min=40, max=199197, avg=23831.51, stdev=42562.11 
00:25:10.079 clat (msec): min=76, max=4270, avg=2372.57, stdev=975.38 00:25:10.079 lat (msec): min=104, max=4317, avg=2396.40, stdev=978.39 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 130], 5.00th=[ 405], 10.00th=[ 1368], 20.00th=[ 1536], 00:25:10.079 | 30.00th=[ 1905], 40.00th=[ 2106], 50.00th=[ 2400], 60.00th=[ 2635], 00:25:10.079 | 70.00th=[ 2769], 80.00th=[ 3339], 90.00th=[ 3742], 95.00th=[ 4010], 00:25:10.079 | 99.00th=[ 4178], 99.50th=[ 4279], 99.90th=[ 4279], 99.95th=[ 4279], 00:25:10.079 | 99.99th=[ 4279] 00:25:10.079 bw ( KiB/s): min=18432, max=120832, per=1.18%, avg=49998.83, stdev=30907.66, samples=12 00:25:10.079 iops : min= 18, max= 118, avg=48.75, stdev=30.21, samples=12 00:25:10.079 lat (msec) : 100=0.24%, 250=3.10%, 500=2.14%, 750=1.19%, 1000=0.71% 00:25:10.079 lat (msec) : 2000=29.05%, >=2000=63.57% 00:25:10.079 cpu : usr=0.06%, sys=0.90%, ctx=938, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.079 issued rwts: total=420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job0: (groupid=0, jobs=1): err= 0: pid=424734: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=94, BW=94.3MiB/s (98.8MB/s)(951MiB/10088msec) 00:25:10.079 slat (usec): min=30, max=197891, avg=10576.12, stdev=33560.81 00:25:10.079 clat (msec): min=27, max=1730, avg=1261.40, stdev=248.72 00:25:10.079 lat (msec): min=153, max=1732, avg=1271.97, stdev=247.73 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 330], 5.00th=[ 743], 10.00th=[ 1036], 20.00th=[ 1150], 00:25:10.079 | 30.00th=[ 1200], 40.00th=[ 1267], 50.00th=[ 1301], 60.00th=[ 1351], 00:25:10.079 | 70.00th=[ 1385], 80.00th=[ 1452], 90.00th=[ 1485], 95.00th=[ 1536], 00:25:10.079 | 99.00th=[ 1636], 99.50th=[ 1720], 99.90th=[ 1737], 99.95th=[ 1737], 00:25:10.079 | 99.99th=[ 1737] 00:25:10.079 bw ( KiB/s): min=43008, max=124928, per=2.20%, avg=93611.17, stdev=21386.73, samples=18 00:25:10.079 iops : min= 42, max= 122, avg=91.33, stdev=20.92, samples=18 00:25:10.079 lat (msec) : 50=0.11%, 250=0.42%, 500=1.58%, 750=3.47%, 1000=2.21% 00:25:10.079 lat (msec) : 2000=92.22% 00:25:10.079 cpu : usr=0.04%, sys=1.37%, ctx=891, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:10.079 issued rwts: total=951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job0: (groupid=0, jobs=1): err= 0: pid=424735: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=61, BW=61.8MiB/s (64.8MB/s)(627MiB/10138msec) 00:25:10.079 slat (usec): min=56, max=100255, avg=15978.18, stdev=17472.95 00:25:10.079 clat (msec): min=115, max=2668, avg=1858.89, stdev=483.53 00:25:10.079 lat (msec): min=180, max=2684, avg=1874.87, stdev=481.73 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 330], 5.00th=[ 1053], 10.00th=[ 1368], 20.00th=[ 1485], 00:25:10.079 | 30.00th=[ 1620], 40.00th=[ 1754], 50.00th=[ 1888], 60.00th=[ 1972], 00:25:10.079 | 70.00th=[ 2106], 80.00th=[ 2333], 90.00th=[ 2534], 95.00th=[ 2601], 00:25:10.079 | 99.00th=[ 2635], 99.50th=[ 2635], 
99.90th=[ 2668], 99.95th=[ 2668], 00:25:10.079 | 99.99th=[ 2668] 00:25:10.079 bw ( KiB/s): min=22528, max=129024, per=1.41%, avg=60108.76, stdev=27885.55, samples=17 00:25:10.079 iops : min= 22, max= 126, avg=58.65, stdev=27.25, samples=17 00:25:10.079 lat (msec) : 250=0.80%, 500=1.44%, 750=0.96%, 1000=1.12%, 2000=57.58% 00:25:10.079 lat (msec) : >=2000=38.12% 00:25:10.079 cpu : usr=0.02%, sys=1.57%, ctx=1159, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=90.0% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.079 issued rwts: total=627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job0: (groupid=0, jobs=1): err= 0: pid=424736: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=46, BW=46.7MiB/s (48.9MB/s)(472MiB/10116msec) 00:25:10.079 slat (usec): min=49, max=174439, avg=21181.58, stdev=29892.54 00:25:10.079 clat (msec): min=115, max=3635, avg=2334.36, stdev=865.17 00:25:10.079 lat (msec): min=169, max=3644, avg=2355.54, stdev=866.92 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 178], 5.00th=[ 326], 10.00th=[ 743], 20.00th=[ 1871], 00:25:10.079 | 30.00th=[ 2265], 40.00th=[ 2366], 50.00th=[ 2534], 60.00th=[ 2702], 00:25:10.079 | 70.00th=[ 2836], 80.00th=[ 2970], 90.00th=[ 3205], 95.00th=[ 3440], 00:25:10.079 | 99.00th=[ 3574], 99.50th=[ 3608], 99.90th=[ 3641], 99.95th=[ 3641], 00:25:10.079 | 99.99th=[ 3641] 00:25:10.079 bw ( KiB/s): min=22528, max=79872, per=1.19%, avg=50468.57, stdev=14561.69, samples=14 00:25:10.079 iops : min= 22, max= 78, avg=49.29, stdev=14.22, samples=14 00:25:10.079 lat (msec) : 250=4.66%, 500=2.33%, 750=3.18%, 1000=2.33%, 2000=8.47% 00:25:10.079 lat (msec) : >=2000=79.03% 00:25:10.079 cpu : usr=0.02%, sys=1.24%, ctx=965, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.079 issued rwts: total=472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job1: (groupid=0, jobs=1): err= 0: pid=424737: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=77, BW=77.2MiB/s (81.0MB/s)(786MiB/10181msec) 00:25:10.079 slat (usec): min=31, max=187862, avg=12803.85, stdev=34988.86 00:25:10.079 clat (msec): min=113, max=2295, avg=1509.26, stdev=380.98 00:25:10.079 lat (msec): min=240, max=2319, avg=1522.07, stdev=381.92 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 284], 5.00th=[ 693], 10.00th=[ 1083], 20.00th=[ 1334], 00:25:10.079 | 30.00th=[ 1452], 40.00th=[ 1469], 50.00th=[ 1485], 60.00th=[ 1519], 00:25:10.079 | 70.00th=[ 1636], 80.00th=[ 1737], 90.00th=[ 2056], 95.00th=[ 2165], 00:25:10.079 | 99.00th=[ 2232], 99.50th=[ 2232], 99.90th=[ 2299], 99.95th=[ 2299], 00:25:10.079 | 99.99th=[ 2299] 00:25:10.079 bw ( KiB/s): min=22573, max=124928, per=1.86%, avg=79272.29, stdev=26168.72, samples=17 00:25:10.079 iops : min= 22, max= 122, avg=77.41, stdev=25.56, samples=17 00:25:10.079 lat (msec) : 250=0.25%, 500=1.02%, 750=3.82%, 1000=3.82%, 2000=76.72% 00:25:10.079 lat (msec) : >=2000=14.38% 00:25:10.079 cpu : usr=0.02%, sys=1.38%, ctx=827, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 
8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.079 issued rwts: total=786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job1: (groupid=0, jobs=1): err= 0: pid=424738: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=48, BW=48.4MiB/s (50.7MB/s)(491MiB/10148msec) 00:25:10.079 slat (usec): min=28, max=180867, avg=20363.69, stdev=39583.99 00:25:10.079 clat (msec): min=147, max=3934, avg=2305.73, stdev=1082.20 00:25:10.079 lat (msec): min=147, max=3938, avg=2326.10, stdev=1087.39 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 150], 5.00th=[ 300], 10.00th=[ 592], 20.00th=[ 1116], 00:25:10.079 | 30.00th=[ 1989], 40.00th=[ 2106], 50.00th=[ 2299], 60.00th=[ 2702], 00:25:10.079 | 70.00th=[ 3037], 80.00th=[ 3440], 90.00th=[ 3675], 95.00th=[ 3742], 00:25:10.079 | 99.00th=[ 3842], 99.50th=[ 3910], 99.90th=[ 3943], 99.95th=[ 3943], 00:25:10.079 | 99.99th=[ 3943] 00:25:10.079 bw ( KiB/s): min=14336, max=114688, per=1.25%, avg=53248.00, stdev=30419.20, samples=14 00:25:10.079 iops : min= 14, max= 112, avg=52.00, stdev=29.71, samples=14 00:25:10.079 lat (msec) : 250=2.04%, 500=6.31%, 750=3.87%, 1000=5.70%, 2000=12.83% 00:25:10.079 lat (msec) : >=2000=69.25% 00:25:10.079 cpu : usr=0.00%, sys=1.29%, ctx=873, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.2% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.079 issued rwts: total=491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job1: (groupid=0, jobs=1): err= 0: pid=424739: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=53, BW=53.9MiB/s (56.5MB/s)(545MiB/10113msec) 00:25:10.079 slat (usec): min=40, max=284149, avg=18349.26, stdev=40107.49 00:25:10.079 clat (msec): min=109, max=2728, avg=1950.01, stdev=518.96 00:25:10.079 lat (msec): min=114, max=2822, avg=1968.36, stdev=519.14 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 203], 5.00th=[ 625], 10.00th=[ 1183], 20.00th=[ 1787], 00:25:10.079 | 30.00th=[ 1905], 40.00th=[ 2022], 50.00th=[ 2089], 60.00th=[ 2165], 00:25:10.079 | 70.00th=[ 2232], 80.00th=[ 2265], 90.00th=[ 2400], 95.00th=[ 2534], 00:25:10.079 | 99.00th=[ 2668], 99.50th=[ 2702], 99.90th=[ 2735], 99.95th=[ 2735], 00:25:10.079 | 99.99th=[ 2735] 00:25:10.079 bw ( KiB/s): min=36864, max=88064, per=1.34%, avg=57070.93, stdev=16253.04, samples=15 00:25:10.079 iops : min= 36, max= 86, avg=55.73, stdev=15.87, samples=15 00:25:10.079 lat (msec) : 250=1.83%, 500=2.20%, 750=1.65%, 1000=2.02%, 2000=29.54% 00:25:10.079 lat (msec) : >=2000=62.75% 00:25:10.079 cpu : usr=0.03%, sys=1.10%, ctx=1125, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.079 issued rwts: total=545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job1: (groupid=0, jobs=1): err= 0: pid=424740: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=55, BW=55.1MiB/s 
(57.7MB/s)(558MiB/10134msec) 00:25:10.079 slat (usec): min=36, max=198515, avg=17948.30, stdev=44036.12 00:25:10.079 clat (msec): min=116, max=3949, avg=1942.48, stdev=1000.72 00:25:10.079 lat (msec): min=134, max=4054, avg=1960.43, stdev=1007.07 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 140], 5.00th=[ 414], 10.00th=[ 735], 20.00th=[ 1301], 00:25:10.079 | 30.00th=[ 1435], 40.00th=[ 1469], 50.00th=[ 1636], 60.00th=[ 1905], 00:25:10.079 | 70.00th=[ 2165], 80.00th=[ 3171], 90.00th=[ 3507], 95.00th=[ 3708], 00:25:10.079 | 99.00th=[ 3910], 99.50th=[ 3943], 99.90th=[ 3943], 99.95th=[ 3943], 00:25:10.079 | 99.99th=[ 3943] 00:25:10.079 bw ( KiB/s): min= 4096, max=104448, per=1.48%, avg=62892.57, stdev=30431.18, samples=14 00:25:10.079 iops : min= 4, max= 102, avg=61.36, stdev=29.70, samples=14 00:25:10.079 lat (msec) : 250=1.25%, 500=5.38%, 750=4.48%, 1000=3.94%, 2000=52.51% 00:25:10.079 lat (msec) : >=2000=32.44% 00:25:10.079 cpu : usr=0.02%, sys=1.01%, ctx=860, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.079 issued rwts: total=558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job1: (groupid=0, jobs=1): err= 0: pid=424741: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=48, BW=48.3MiB/s (50.7MB/s)(491MiB/10157msec) 00:25:10.079 slat (usec): min=38, max=236259, avg=20399.50, stdev=47506.46 00:25:10.079 clat (msec): min=137, max=4999, avg=2333.31, stdev=1307.16 00:25:10.079 lat (msec): min=219, max=5078, avg=2353.71, stdev=1312.54 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 296], 5.00th=[ 477], 10.00th=[ 802], 20.00th=[ 1334], 00:25:10.079 | 30.00th=[ 1586], 40.00th=[ 1854], 50.00th=[ 1888], 60.00th=[ 2022], 00:25:10.079 | 70.00th=[ 3104], 80.00th=[ 3742], 90.00th=[ 4463], 95.00th=[ 4665], 00:25:10.079 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:25:10.079 | 99.99th=[ 5000] 00:25:10.079 bw ( KiB/s): min=18432, max=96448, per=1.25%, avg=53112.14, stdev=31270.42, samples=14 00:25:10.079 iops : min= 18, max= 94, avg=51.64, stdev=30.72, samples=14 00:25:10.079 lat (msec) : 250=0.41%, 500=6.11%, 750=3.05%, 1000=6.11%, 2000=44.20% 00:25:10.079 lat (msec) : >=2000=40.12% 00:25:10.079 cpu : usr=0.04%, sys=1.02%, ctx=804, majf=0, minf=32769 00:25:10.079 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.2% 00:25:10.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.079 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.079 issued rwts: total=491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.079 job1: (groupid=0, jobs=1): err= 0: pid=424742: Tue Nov 19 01:09:14 2024 00:25:10.079 read: IOPS=45, BW=45.1MiB/s (47.3MB/s)(456MiB/10113msec) 00:25:10.079 slat (usec): min=33, max=205615, avg=21936.52, stdev=37125.75 00:25:10.079 clat (msec): min=107, max=4455, avg=2438.93, stdev=1086.85 00:25:10.079 lat (msec): min=115, max=4458, avg=2460.87, stdev=1090.27 00:25:10.079 clat percentiles (msec): 00:25:10.079 | 1.00th=[ 247], 5.00th=[ 451], 10.00th=[ 768], 20.00th=[ 1284], 00:25:10.079 | 30.00th=[ 2299], 40.00th=[ 2433], 50.00th=[ 2567], 60.00th=[ 2702], 00:25:10.079 | 70.00th=[ 2869], 
80.00th=[ 3373], 90.00th=[ 3842], 95.00th=[ 4212], 00:25:10.079 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:25:10.080 | 99.99th=[ 4463] 00:25:10.080 bw ( KiB/s): min= 2048, max=81920, per=0.99%, avg=42115.31, stdev=24505.05, samples=16 00:25:10.080 iops : min= 2, max= 80, avg=41.12, stdev=23.93, samples=16 00:25:10.080 lat (msec) : 250=1.10%, 500=4.61%, 750=3.51%, 1000=6.58%, 2000=10.96% 00:25:10.080 lat (msec) : >=2000=73.25% 00:25:10.080 cpu : usr=0.03%, sys=1.09%, ctx=1078, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.0%, >=64=86.2% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.080 issued rwts: total=456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job1: (groupid=0, jobs=1): err= 0: pid=424743: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=64, BW=64.6MiB/s (67.7MB/s)(653MiB/10115msec) 00:25:10.080 slat (usec): min=51, max=204368, avg=15318.20, stdev=40494.63 00:25:10.080 clat (msec): min=109, max=3088, avg=1650.17, stdev=505.97 00:25:10.080 lat (msec): min=118, max=3094, avg=1665.49, stdev=508.61 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 220], 5.00th=[ 609], 10.00th=[ 1183], 20.00th=[ 1418], 00:25:10.080 | 30.00th=[ 1452], 40.00th=[ 1569], 50.00th=[ 1603], 60.00th=[ 1720], 00:25:10.080 | 70.00th=[ 1871], 80.00th=[ 1989], 90.00th=[ 2106], 95.00th=[ 2567], 00:25:10.080 | 99.00th=[ 3071], 99.50th=[ 3071], 99.90th=[ 3104], 99.95th=[ 3104], 00:25:10.080 | 99.99th=[ 3104] 00:25:10.080 bw ( KiB/s): min=32768, max=132854, per=1.81%, avg=76933.93, stdev=22974.32, samples=14 00:25:10.080 iops : min= 32, max= 129, avg=74.93, stdev=22.35, samples=14 00:25:10.080 lat (msec) : 250=1.53%, 500=2.91%, 750=2.45%, 1000=1.84%, 2000=73.35% 00:25:10.080 lat (msec) : >=2000=17.92% 00:25:10.080 cpu : usr=0.04%, sys=1.23%, ctx=761, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.4% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.080 issued rwts: total=653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job1: (groupid=0, jobs=1): err= 0: pid=424744: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=47, BW=47.5MiB/s (49.8MB/s)(483MiB/10171msec) 00:25:10.080 slat (usec): min=57, max=205436, avg=20715.34, stdev=35362.42 00:25:10.080 clat (msec): min=162, max=3509, avg=2407.47, stdev=679.57 00:25:10.080 lat (msec): min=175, max=3627, avg=2428.19, stdev=677.93 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 292], 5.00th=[ 953], 10.00th=[ 1469], 20.00th=[ 2089], 00:25:10.080 | 30.00th=[ 2165], 40.00th=[ 2232], 50.00th=[ 2433], 60.00th=[ 2567], 00:25:10.080 | 70.00th=[ 2869], 80.00th=[ 3037], 90.00th=[ 3205], 95.00th=[ 3306], 00:25:10.080 | 99.00th=[ 3440], 99.50th=[ 3440], 99.90th=[ 3507], 99.95th=[ 3507], 00:25:10.080 | 99.99th=[ 3507] 00:25:10.080 bw ( KiB/s): min=26624, max=83968, per=1.07%, avg=45440.69, stdev=17057.55, samples=16 00:25:10.080 iops : min= 26, max= 82, avg=44.31, stdev=16.72, samples=16 00:25:10.080 lat (msec) : 250=0.62%, 500=1.45%, 750=1.66%, 1000=2.28%, 2000=9.73% 00:25:10.080 lat (msec) : >=2000=84.27% 00:25:10.080 cpu : 
usr=0.01%, sys=1.40%, ctx=880, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.6%, >=64=87.0% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.080 issued rwts: total=483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job1: (groupid=0, jobs=1): err= 0: pid=424745: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=67, BW=67.3MiB/s (70.6MB/s)(685MiB/10179msec) 00:25:10.080 slat (usec): min=37, max=201993, avg=14627.38, stdev=35844.63 00:25:10.080 clat (msec): min=156, max=3359, avg=1656.94, stdev=775.55 00:25:10.080 lat (msec): min=184, max=3359, avg=1671.57, stdev=777.08 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 305], 5.00th=[ 860], 10.00th=[ 877], 20.00th=[ 969], 00:25:10.080 | 30.00th=[ 1083], 40.00th=[ 1234], 50.00th=[ 1385], 60.00th=[ 1603], 00:25:10.080 | 70.00th=[ 2056], 80.00th=[ 2500], 90.00th=[ 2903], 95.00th=[ 3171], 00:25:10.080 | 99.00th=[ 3339], 99.50th=[ 3373], 99.90th=[ 3373], 99.95th=[ 3373], 00:25:10.080 | 99.99th=[ 3373] 00:25:10.080 bw ( KiB/s): min=18432, max=147456, per=1.79%, avg=76013.53, stdev=47088.28, samples=15 00:25:10.080 iops : min= 18, max= 144, avg=74.13, stdev=45.86, samples=15 00:25:10.080 lat (msec) : 250=0.73%, 500=1.02%, 750=0.88%, 1000=19.42%, 2000=46.57% 00:25:10.080 lat (msec) : >=2000=31.39% 00:25:10.080 cpu : usr=0.00%, sys=1.18%, ctx=909, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.080 issued rwts: total=685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job1: (groupid=0, jobs=1): err= 0: pid=424746: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=29, BW=29.7MiB/s (31.2MB/s)(299MiB/10058msec) 00:25:10.080 slat (usec): min=453, max=239050, avg=33448.11, stdev=53700.42 00:25:10.080 clat (msec): min=55, max=5509, avg=3392.08, stdev=1510.48 00:25:10.080 lat (msec): min=113, max=5512, avg=3425.53, stdev=1512.06 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 159], 5.00th=[ 527], 10.00th=[ 844], 20.00th=[ 1972], 00:25:10.080 | 30.00th=[ 2802], 40.00th=[ 3406], 50.00th=[ 3641], 60.00th=[ 3842], 00:25:10.080 | 70.00th=[ 4732], 80.00th=[ 5000], 90.00th=[ 5134], 95.00th=[ 5269], 00:25:10.080 | 99.00th=[ 5470], 99.50th=[ 5470], 99.90th=[ 5537], 99.95th=[ 5537], 00:25:10.080 | 99.99th=[ 5537] 00:25:10.080 bw ( KiB/s): min= 6144, max=61440, per=0.75%, avg=32005.45, stdev=13121.34, samples=11 00:25:10.080 iops : min= 6, max= 60, avg=31.18, stdev=12.81, samples=11 00:25:10.080 lat (msec) : 100=0.33%, 250=1.67%, 500=2.34%, 750=4.68%, 1000=2.68% 00:25:10.080 lat (msec) : 2000=9.03%, >=2000=79.26% 00:25:10.080 cpu : usr=0.00%, sys=0.95%, ctx=902, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.4%, 32=10.7%, >=64=78.9% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:25:10.080 issued rwts: total=299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job1: 
(groupid=0, jobs=1): err= 0: pid=424747: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=38, BW=38.5MiB/s (40.4MB/s)(387MiB/10055msec) 00:25:10.080 slat (usec): min=54, max=199387, avg=25839.30, stdev=46906.78 00:25:10.080 clat (msec): min=53, max=4644, avg=2547.67, stdev=1263.41 00:25:10.080 lat (msec): min=54, max=4733, avg=2573.51, stdev=1270.16 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 58], 5.00th=[ 393], 10.00th=[ 776], 20.00th=[ 1620], 00:25:10.080 | 30.00th=[ 1972], 40.00th=[ 2198], 50.00th=[ 2400], 60.00th=[ 2601], 00:25:10.080 | 70.00th=[ 3138], 80.00th=[ 3876], 90.00th=[ 4463], 95.00th=[ 4597], 00:25:10.080 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:25:10.080 | 99.99th=[ 4665] 00:25:10.080 bw ( KiB/s): min=26624, max=69632, per=1.11%, avg=47104.00, stdev=16355.53, samples=10 00:25:10.080 iops : min= 26, max= 68, avg=46.00, stdev=15.97, samples=10 00:25:10.080 lat (msec) : 100=1.55%, 250=1.81%, 500=3.88%, 750=2.58%, 1000=3.10% 00:25:10.080 lat (msec) : 2000=18.60%, >=2000=68.48% 00:25:10.080 cpu : usr=0.01%, sys=0.88%, ctx=968, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.3%, >=64=83.7% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:10.080 issued rwts: total=387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job1: (groupid=0, jobs=1): err= 0: pid=424748: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=55, BW=55.0MiB/s (57.7MB/s)(557MiB/10120msec) 00:25:10.080 slat (usec): min=49, max=191724, avg=17979.20, stdev=32023.70 00:25:10.080 clat (msec): min=102, max=4323, avg=2205.98, stdev=1064.35 00:25:10.080 lat (msec): min=179, max=4391, avg=2223.95, stdev=1067.60 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 275], 5.00th=[ 1053], 10.00th=[ 1183], 20.00th=[ 1351], 00:25:10.080 | 30.00th=[ 1469], 40.00th=[ 1636], 50.00th=[ 1787], 60.00th=[ 2056], 00:25:10.080 | 70.00th=[ 2769], 80.00th=[ 3507], 90.00th=[ 3943], 95.00th=[ 4077], 00:25:10.080 | 99.00th=[ 4279], 99.50th=[ 4329], 99.90th=[ 4329], 99.95th=[ 4329], 00:25:10.080 | 99.99th=[ 4329] 00:25:10.080 bw ( KiB/s): min=14336, max=139264, per=1.15%, avg=48814.72, stdev=31921.63, samples=18 00:25:10.080 iops : min= 14, max= 136, avg=47.67, stdev=31.18, samples=18 00:25:10.080 lat (msec) : 250=0.90%, 500=1.26%, 750=0.54%, 1000=1.26%, 2000=53.86% 00:25:10.080 lat (msec) : >=2000=42.19% 00:25:10.080 cpu : usr=0.03%, sys=1.25%, ctx=1028, majf=0, minf=32770 00:25:10.080 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.080 issued rwts: total=557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job1: (groupid=0, jobs=1): err= 0: pid=424749: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=37, BW=37.9MiB/s (39.7MB/s)(381MiB/10064msec) 00:25:10.080 slat (usec): min=69, max=338437, avg=26256.81, stdev=50200.10 00:25:10.080 clat (msec): min=57, max=5281, avg=3134.94, stdev=1455.14 00:25:10.080 lat (msec): min=65, max=5328, avg=3161.19, stdev=1458.93 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 129], 5.00th=[ 558], 10.00th=[ 1020], 20.00th=[ 1636], 
00:25:10.080 | 30.00th=[ 2022], 40.00th=[ 3004], 50.00th=[ 3574], 60.00th=[ 3910], 00:25:10.080 | 70.00th=[ 4010], 80.00th=[ 4463], 90.00th=[ 4933], 95.00th=[ 5067], 00:25:10.080 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:25:10.080 | 99.99th=[ 5269] 00:25:10.080 bw ( KiB/s): min=10240, max=49152, per=0.68%, avg=28899.56, stdev=12681.05, samples=18 00:25:10.080 iops : min= 10, max= 48, avg=28.22, stdev=12.38, samples=18 00:25:10.080 lat (msec) : 100=0.79%, 250=1.57%, 500=1.84%, 750=2.62%, 1000=3.15% 00:25:10.080 lat (msec) : 2000=19.16%, >=2000=70.87% 00:25:10.080 cpu : usr=0.00%, sys=1.17%, ctx=988, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.5% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:10.080 issued rwts: total=381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job2: (groupid=0, jobs=1): err= 0: pid=424750: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=36, BW=36.5MiB/s (38.3MB/s)(370MiB/10135msec) 00:25:10.080 slat (usec): min=50, max=307558, avg=27133.25, stdev=48075.04 00:25:10.080 clat (msec): min=93, max=4197, avg=2689.91, stdev=694.91 00:25:10.080 lat (msec): min=177, max=4204, avg=2717.04, stdev=691.36 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 351], 5.00th=[ 1636], 10.00th=[ 1938], 20.00th=[ 2265], 00:25:10.080 | 30.00th=[ 2433], 40.00th=[ 2500], 50.00th=[ 2601], 60.00th=[ 2735], 00:25:10.080 | 70.00th=[ 2903], 80.00th=[ 3306], 90.00th=[ 3742], 95.00th=[ 3876], 00:25:10.080 | 99.00th=[ 4044], 99.50th=[ 4178], 99.90th=[ 4212], 99.95th=[ 4212], 00:25:10.080 | 99.99th=[ 4212] 00:25:10.080 bw ( KiB/s): min= 6144, max=75776, per=0.97%, avg=41301.33, stdev=24417.75, samples=12 00:25:10.080 iops : min= 6, max= 74, avg=40.33, stdev=23.85, samples=12 00:25:10.080 lat (msec) : 100=0.27%, 250=0.27%, 500=0.54%, 750=0.54%, 1000=0.27% 00:25:10.080 lat (msec) : 2000=8.38%, >=2000=89.73% 00:25:10.080 cpu : usr=0.02%, sys=1.02%, ctx=824, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.0% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:10.080 issued rwts: total=370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job2: (groupid=0, jobs=1): err= 0: pid=424751: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=53, BW=53.1MiB/s (55.7MB/s)(538MiB/10129msec) 00:25:10.080 slat (usec): min=32, max=190003, avg=18648.42, stdev=38803.54 00:25:10.080 clat (msec): min=93, max=3134, avg=2006.60, stdev=659.07 00:25:10.080 lat (msec): min=151, max=3134, avg=2025.25, stdev=659.71 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 169], 5.00th=[ 676], 10.00th=[ 1385], 20.00th=[ 1452], 00:25:10.080 | 30.00th=[ 1552], 40.00th=[ 1838], 50.00th=[ 2072], 60.00th=[ 2366], 00:25:10.080 | 70.00th=[ 2500], 80.00th=[ 2601], 90.00th=[ 2769], 95.00th=[ 2836], 00:25:10.080 | 99.00th=[ 3104], 99.50th=[ 3138], 99.90th=[ 3138], 99.95th=[ 3138], 00:25:10.080 | 99.99th=[ 3138] 00:25:10.080 bw ( KiB/s): min=10219, max=94208, per=1.23%, avg=52478.69, stdev=25083.74, samples=16 00:25:10.080 iops : min= 9, max= 92, avg=51.19, stdev=24.61, samples=16 00:25:10.080 
lat (msec) : 100=0.19%, 250=1.30%, 500=1.49%, 750=2.60%, 1000=1.49% 00:25:10.080 lat (msec) : 2000=39.41%, >=2000=53.53% 00:25:10.080 cpu : usr=0.01%, sys=1.17%, ctx=792, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.080 issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job2: (groupid=0, jobs=1): err= 0: pid=424752: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=50, BW=50.2MiB/s (52.6MB/s)(510MiB/10162msec) 00:25:10.080 slat (usec): min=38, max=184422, avg=19693.92, stdev=25000.13 00:25:10.080 clat (msec): min=114, max=3377, avg=2384.56, stdev=697.38 00:25:10.080 lat (msec): min=187, max=3393, avg=2404.26, stdev=696.03 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 239], 5.00th=[ 877], 10.00th=[ 1351], 20.00th=[ 2005], 00:25:10.080 | 30.00th=[ 2198], 40.00th=[ 2333], 50.00th=[ 2500], 60.00th=[ 2735], 00:25:10.080 | 70.00th=[ 2836], 80.00th=[ 2970], 90.00th=[ 3104], 95.00th=[ 3171], 00:25:10.080 | 99.00th=[ 3339], 99.50th=[ 3373], 99.90th=[ 3373], 99.95th=[ 3373], 00:25:10.080 | 99.99th=[ 3373] 00:25:10.080 bw ( KiB/s): min= 4096, max=86016, per=1.02%, avg=43463.11, stdev=18499.55, samples=18 00:25:10.080 iops : min= 4, max= 84, avg=42.44, stdev=18.07, samples=18 00:25:10.080 lat (msec) : 250=1.18%, 500=1.76%, 750=1.37%, 1000=2.35%, 2000=12.94% 00:25:10.080 lat (msec) : >=2000=80.39% 00:25:10.080 cpu : usr=0.03%, sys=1.38%, ctx=958, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.080 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job2: (groupid=0, jobs=1): err= 0: pid=424753: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=53, BW=53.3MiB/s (55.9MB/s)(541MiB/10143msec) 00:25:10.080 slat (usec): min=52, max=204220, avg=18545.83, stdev=38038.31 00:25:10.080 clat (msec): min=107, max=3666, avg=2198.80, stdev=706.35 00:25:10.080 lat (msec): min=149, max=3736, avg=2217.35, stdev=706.56 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 226], 5.00th=[ 1150], 10.00th=[ 1653], 20.00th=[ 1737], 00:25:10.080 | 30.00th=[ 1804], 40.00th=[ 1854], 50.00th=[ 1955], 60.00th=[ 2265], 00:25:10.080 | 70.00th=[ 2601], 80.00th=[ 2970], 90.00th=[ 3171], 95.00th=[ 3373], 00:25:10.080 | 99.00th=[ 3574], 99.50th=[ 3574], 99.90th=[ 3675], 99.95th=[ 3675], 00:25:10.080 | 99.99th=[ 3675] 00:25:10.080 bw ( KiB/s): min=22528, max=88064, per=1.17%, avg=49754.35, stdev=18919.97, samples=17 00:25:10.080 iops : min= 22, max= 86, avg=48.59, stdev=18.48, samples=17 00:25:10.080 lat (msec) : 250=1.11%, 500=1.48%, 750=0.55%, 1000=1.29%, 2000=47.13% 00:25:10.080 lat (msec) : >=2000=48.43% 00:25:10.080 cpu : usr=0.01%, sys=1.12%, ctx=819, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.4% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.080 issued rwts: 
total=541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.080 job2: (groupid=0, jobs=1): err= 0: pid=424754: Tue Nov 19 01:09:14 2024 00:25:10.080 read: IOPS=60, BW=60.1MiB/s (63.0MB/s)(608MiB/10116msec) 00:25:10.080 slat (usec): min=41, max=148830, avg=16448.41, stdev=23887.84 00:25:10.080 clat (msec): min=111, max=3154, avg=1915.47, stdev=556.16 00:25:10.080 lat (msec): min=127, max=3191, avg=1931.92, stdev=555.08 00:25:10.080 clat percentiles (msec): 00:25:10.080 | 1.00th=[ 414], 5.00th=[ 969], 10.00th=[ 1318], 20.00th=[ 1569], 00:25:10.080 | 30.00th=[ 1720], 40.00th=[ 1787], 50.00th=[ 1854], 60.00th=[ 1938], 00:25:10.080 | 70.00th=[ 2106], 80.00th=[ 2333], 90.00th=[ 2769], 95.00th=[ 2970], 00:25:10.080 | 99.00th=[ 3138], 99.50th=[ 3138], 99.90th=[ 3171], 99.95th=[ 3171], 00:25:10.080 | 99.99th=[ 3171] 00:25:10.080 bw ( KiB/s): min=26624, max=112865, per=1.55%, avg=65724.07, stdev=24674.94, samples=15 00:25:10.080 iops : min= 26, max= 110, avg=64.13, stdev=24.05, samples=15 00:25:10.080 lat (msec) : 250=0.66%, 500=0.66%, 750=1.48%, 1000=2.47%, 2000=59.38% 00:25:10.080 lat (msec) : >=2000=35.36% 00:25:10.080 cpu : usr=0.07%, sys=1.34%, ctx=1081, majf=0, minf=32769 00:25:10.080 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:25:10.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.080 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.080 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job2: (groupid=0, jobs=1): err= 0: pid=424755: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=57, BW=58.0MiB/s (60.8MB/s)(587MiB/10128msec) 00:25:10.081 slat (usec): min=58, max=180127, avg=17034.99, stdev=36304.34 00:25:10.081 clat (msec): min=124, max=3621, avg=2087.83, stdev=809.97 00:25:10.081 lat (msec): min=128, max=3627, avg=2104.87, stdev=812.45 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 251], 5.00th=[ 651], 10.00th=[ 1351], 20.00th=[ 1586], 00:25:10.081 | 30.00th=[ 1620], 40.00th=[ 1653], 50.00th=[ 1821], 60.00th=[ 2198], 00:25:10.081 | 70.00th=[ 2534], 80.00th=[ 3104], 90.00th=[ 3306], 95.00th=[ 3406], 00:25:10.081 | 99.00th=[ 3540], 99.50th=[ 3574], 99.90th=[ 3608], 99.95th=[ 3608], 00:25:10.081 | 99.99th=[ 3608] 00:25:10.081 bw ( KiB/s): min=12288, max=96256, per=1.23%, avg=52328.00, stdev=21533.83, samples=18 00:25:10.081 iops : min= 12, max= 94, avg=51.06, stdev=20.95, samples=18 00:25:10.081 lat (msec) : 250=0.85%, 500=1.53%, 750=3.24%, 1000=2.04%, 2000=48.55% 00:25:10.081 lat (msec) : >=2000=43.78% 00:25:10.081 cpu : usr=0.00%, sys=1.49%, ctx=852, majf=0, minf=32769 00:25:10.081 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.3% 00:25:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.081 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.081 issued rwts: total=587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job2: (groupid=0, jobs=1): err= 0: pid=424756: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=51, BW=51.7MiB/s (54.2MB/s)(523MiB/10120msec) 00:25:10.081 slat (usec): min=28, max=195295, avg=19116.56, stdev=37553.69 00:25:10.081 clat (msec): min=118, max=3801, avg=2014.51, stdev=704.83 00:25:10.081 lat (msec): min=127, max=3864, avg=2033.63, 
stdev=708.13 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 255], 5.00th=[ 418], 10.00th=[ 919], 20.00th=[ 1804], 00:25:10.081 | 30.00th=[ 2005], 40.00th=[ 2056], 50.00th=[ 2089], 60.00th=[ 2140], 00:25:10.081 | 70.00th=[ 2198], 80.00th=[ 2265], 90.00th=[ 2635], 95.00th=[ 3406], 00:25:10.081 | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 3809], 99.95th=[ 3809], 00:25:10.081 | 99.99th=[ 3809] 00:25:10.081 bw ( KiB/s): min= 4096, max=86016, per=1.36%, avg=57920.07, stdev=22753.56, samples=14 00:25:10.081 iops : min= 4, max= 84, avg=56.43, stdev=22.13, samples=14 00:25:10.081 lat (msec) : 250=0.76%, 500=5.54%, 750=1.91%, 1000=2.49%, 2000=19.31% 00:25:10.081 lat (msec) : >=2000=69.98% 00:25:10.081 cpu : usr=0.00%, sys=1.15%, ctx=810, majf=0, minf=32769 00:25:10.081 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=88.0% 00:25:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.081 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.081 issued rwts: total=523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job2: (groupid=0, jobs=1): err= 0: pid=424757: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=59, BW=59.4MiB/s (62.3MB/s)(605MiB/10187msec) 00:25:10.081 slat (usec): min=40, max=164400, avg=16567.56, stdev=31075.35 00:25:10.081 clat (msec): min=160, max=2781, avg=1954.88, stdev=483.83 00:25:10.081 lat (msec): min=292, max=2783, avg=1971.44, stdev=481.80 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 468], 5.00th=[ 1200], 10.00th=[ 1301], 20.00th=[ 1452], 00:25:10.081 | 30.00th=[ 1670], 40.00th=[ 1955], 50.00th=[ 2106], 60.00th=[ 2198], 00:25:10.081 | 70.00th=[ 2299], 80.00th=[ 2366], 90.00th=[ 2467], 95.00th=[ 2534], 00:25:10.081 | 99.00th=[ 2668], 99.50th=[ 2668], 99.90th=[ 2769], 99.95th=[ 2769], 00:25:10.081 | 99.99th=[ 2769] 00:25:10.081 bw ( KiB/s): min=12288, max=112640, per=1.35%, avg=57447.35, stdev=30310.20, samples=17 00:25:10.081 iops : min= 12, max= 110, avg=56.00, stdev=29.55, samples=17 00:25:10.081 lat (msec) : 250=0.17%, 500=1.49%, 750=0.50%, 1000=1.82%, 2000=37.85% 00:25:10.081 lat (msec) : >=2000=58.18% 00:25:10.081 cpu : usr=0.02%, sys=1.27%, ctx=1058, majf=0, minf=32769 00:25:10.081 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:25:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.081 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.081 issued rwts: total=605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job2: (groupid=0, jobs=1): err= 0: pid=424758: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=68, BW=68.6MiB/s (71.9MB/s)(691MiB/10074msec) 00:25:10.081 slat (usec): min=31, max=157226, avg=14469.72, stdev=31236.55 00:25:10.081 clat (msec): min=72, max=3563, avg=1637.96, stdev=706.11 00:25:10.081 lat (msec): min=75, max=3564, avg=1652.43, stdev=706.21 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 89], 5.00th=[ 936], 10.00th=[ 1045], 20.00th=[ 1167], 00:25:10.081 | 30.00th=[ 1318], 40.00th=[ 1385], 50.00th=[ 1452], 60.00th=[ 1519], 00:25:10.081 | 70.00th=[ 1586], 80.00th=[ 1888], 90.00th=[ 3004], 95.00th=[ 3239], 00:25:10.081 | 99.00th=[ 3473], 99.50th=[ 3574], 99.90th=[ 3574], 99.95th=[ 3574], 00:25:10.081 | 99.99th=[ 3574] 00:25:10.081 bw ( KiB/s): min=16384, max=155337, per=1.60%, avg=67919.18, 
stdev=41810.10, samples=17 00:25:10.081 iops : min= 16, max= 151, avg=66.24, stdev=40.80, samples=17 00:25:10.081 lat (msec) : 100=1.01%, 250=0.29%, 500=0.29%, 750=1.88%, 1000=5.35% 00:25:10.081 lat (msec) : 2000=72.65%, >=2000=18.52% 00:25:10.081 cpu : usr=0.04%, sys=1.29%, ctx=931, majf=0, minf=32769 00:25:10.081 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:25:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.081 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.081 issued rwts: total=691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job2: (groupid=0, jobs=1): err= 0: pid=424759: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=46, BW=46.1MiB/s (48.3MB/s)(466MiB/10116msec) 00:25:10.081 slat (usec): min=34, max=248425, avg=21481.01, stdev=39621.69 00:25:10.081 clat (msec): min=103, max=3453, avg=2419.29, stdev=828.37 00:25:10.081 lat (msec): min=125, max=3502, avg=2440.77, stdev=829.04 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 140], 5.00th=[ 405], 10.00th=[ 944], 20.00th=[ 2106], 00:25:10.081 | 30.00th=[ 2366], 40.00th=[ 2534], 50.00th=[ 2668], 60.00th=[ 2802], 00:25:10.081 | 70.00th=[ 2937], 80.00th=[ 3004], 90.00th=[ 3138], 95.00th=[ 3339], 00:25:10.081 | 99.00th=[ 3406], 99.50th=[ 3440], 99.90th=[ 3440], 99.95th=[ 3440], 00:25:10.081 | 99.99th=[ 3440] 00:25:10.081 bw ( KiB/s): min=14336, max=81920, per=1.02%, avg=43264.00, stdev=16054.28, samples=16 00:25:10.081 iops : min= 14, max= 80, avg=42.25, stdev=15.68, samples=16 00:25:10.081 lat (msec) : 250=2.36%, 500=3.65%, 750=2.36%, 1000=2.36%, 2000=9.23% 00:25:10.081 lat (msec) : >=2000=80.04% 00:25:10.081 cpu : usr=0.03%, sys=1.06%, ctx=948, majf=0, minf=32769 00:25:10.081 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5% 00:25:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.081 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.081 issued rwts: total=466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job2: (groupid=0, jobs=1): err= 0: pid=424760: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=47, BW=47.9MiB/s (50.3MB/s)(485MiB/10118msec) 00:25:10.081 slat (usec): min=38, max=153800, avg=20651.22, stdev=32544.36 00:25:10.081 clat (msec): min=99, max=3141, avg=2306.30, stdev=822.39 00:25:10.081 lat (msec): min=151, max=3178, avg=2326.95, stdev=824.91 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 165], 5.00th=[ 550], 10.00th=[ 827], 20.00th=[ 1351], 00:25:10.081 | 30.00th=[ 2333], 40.00th=[ 2668], 50.00th=[ 2735], 60.00th=[ 2769], 00:25:10.081 | 70.00th=[ 2802], 80.00th=[ 2869], 90.00th=[ 2937], 95.00th=[ 2970], 00:25:10.081 | 99.00th=[ 3104], 99.50th=[ 3104], 99.90th=[ 3138], 99.95th=[ 3138], 00:25:10.081 | 99.99th=[ 3138] 00:25:10.081 bw ( KiB/s): min=20480, max=102400, per=1.15%, avg=48718.73, stdev=19372.09, samples=15 00:25:10.081 iops : min= 20, max= 100, avg=47.53, stdev=18.89, samples=15 00:25:10.081 lat (msec) : 100=0.21%, 250=1.44%, 500=1.44%, 750=6.39%, 1000=6.39% 00:25:10.081 lat (msec) : 2000=8.04%, >=2000=76.08% 00:25:10.081 cpu : usr=0.02%, sys=1.15%, ctx=857, majf=0, minf=32769 00:25:10.081 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.6%, >=64=87.0% 00:25:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:10.081 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.081 issued rwts: total=485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job2: (groupid=0, jobs=1): err= 0: pid=424761: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=58, BW=58.5MiB/s (61.3MB/s)(593MiB/10141msec) 00:25:10.081 slat (usec): min=34, max=218616, avg=16859.48, stdev=30914.52 00:25:10.081 clat (msec): min=140, max=2870, avg=1870.26, stdev=683.42 00:25:10.081 lat (msec): min=140, max=2872, avg=1887.12, stdev=685.09 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 146], 5.00th=[ 485], 10.00th=[ 818], 20.00th=[ 1435], 00:25:10.081 | 30.00th=[ 1569], 40.00th=[ 1737], 50.00th=[ 1888], 60.00th=[ 1972], 00:25:10.081 | 70.00th=[ 2265], 80.00th=[ 2635], 90.00th=[ 2769], 95.00th=[ 2802], 00:25:10.081 | 99.00th=[ 2869], 99.50th=[ 2869], 99.90th=[ 2869], 99.95th=[ 2869], 00:25:10.081 | 99.99th=[ 2869] 00:25:10.081 bw ( KiB/s): min=28672, max=122880, per=1.50%, avg=63624.53, stdev=23801.48, samples=15 00:25:10.081 iops : min= 28, max= 120, avg=62.13, stdev=23.24, samples=15 00:25:10.081 lat (msec) : 250=1.01%, 500=5.23%, 750=2.53%, 1000=3.71%, 2000=48.57% 00:25:10.081 lat (msec) : >=2000=38.95% 00:25:10.081 cpu : usr=0.03%, sys=1.19%, ctx=830, majf=0, minf=32769 00:25:10.081 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:25:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.081 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.081 issued rwts: total=593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job2: (groupid=0, jobs=1): err= 0: pid=424762: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=69, BW=69.5MiB/s (72.9MB/s)(703MiB/10117msec) 00:25:10.081 slat (usec): min=30, max=159311, avg=14224.69, stdev=27960.62 00:25:10.081 clat (msec): min=113, max=2091, avg=1686.20, stdev=346.92 00:25:10.081 lat (msec): min=152, max=2107, avg=1700.43, stdev=344.97 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 279], 5.00th=[ 961], 10.00th=[ 1385], 20.00th=[ 1519], 00:25:10.081 | 30.00th=[ 1636], 40.00th=[ 1720], 50.00th=[ 1770], 60.00th=[ 1838], 00:25:10.081 | 70.00th=[ 1888], 80.00th=[ 1921], 90.00th=[ 1972], 95.00th=[ 2005], 00:25:10.081 | 99.00th=[ 2056], 99.50th=[ 2072], 99.90th=[ 2089], 99.95th=[ 2089], 00:25:10.081 | 99.99th=[ 2089] 00:25:10.081 bw ( KiB/s): min=32768, max=129024, per=1.63%, avg=69401.65, stdev=25101.32, samples=17 00:25:10.081 iops : min= 32, max= 126, avg=67.76, stdev=24.50, samples=17 00:25:10.081 lat (msec) : 250=1.00%, 500=1.71%, 750=1.00%, 1000=1.85%, 2000=89.05% 00:25:10.081 lat (msec) : >=2000=5.41% 00:25:10.081 cpu : usr=0.01%, sys=1.60%, ctx=923, majf=0, minf=32769 00:25:10.081 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:25:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.081 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.081 issued rwts: total=703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job3: (groupid=0, jobs=1): err= 0: pid=424763: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=65, BW=65.6MiB/s (68.8MB/s)(664MiB/10120msec) 00:25:10.081 slat (usec): min=34, max=283196, avg=15057.54, stdev=35661.32 00:25:10.081 clat 
(msec): min=119, max=3105, avg=1729.41, stdev=877.16 00:25:10.081 lat (msec): min=121, max=3107, avg=1744.47, stdev=882.21 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 130], 5.00th=[ 502], 10.00th=[ 911], 20.00th=[ 995], 00:25:10.081 | 30.00th=[ 1083], 40.00th=[ 1200], 50.00th=[ 1368], 60.00th=[ 1938], 00:25:10.081 | 70.00th=[ 2668], 80.00th=[ 2769], 90.00th=[ 2903], 95.00th=[ 2970], 00:25:10.081 | 99.00th=[ 3071], 99.50th=[ 3104], 99.90th=[ 3104], 99.95th=[ 3104], 00:25:10.081 | 99.99th=[ 3104] 00:25:10.081 bw ( KiB/s): min=28672, max=143360, per=1.72%, avg=73312.07, stdev=40242.73, samples=15 00:25:10.081 iops : min= 28, max= 140, avg=71.53, stdev=39.34, samples=15 00:25:10.081 lat (msec) : 250=2.41%, 500=2.56%, 750=2.56%, 1000=13.10%, 2000=39.61% 00:25:10.081 lat (msec) : >=2000=39.76% 00:25:10.081 cpu : usr=0.05%, sys=1.23%, ctx=964, majf=0, minf=32769 00:25:10.081 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:25:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.081 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.081 issued rwts: total=664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job3: (groupid=0, jobs=1): err= 0: pid=424764: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=42, BW=42.9MiB/s (45.0MB/s)(433MiB/10085msec) 00:25:10.081 slat (usec): min=61, max=208944, avg=23092.82, stdev=38525.83 00:25:10.081 clat (msec): min=83, max=3931, avg=2629.94, stdev=804.33 00:25:10.081 lat (msec): min=93, max=3956, avg=2653.03, stdev=801.53 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 146], 5.00th=[ 827], 10.00th=[ 1821], 20.00th=[ 2140], 00:25:10.081 | 30.00th=[ 2265], 40.00th=[ 2433], 50.00th=[ 2668], 60.00th=[ 2802], 00:25:10.081 | 70.00th=[ 3138], 80.00th=[ 3440], 90.00th=[ 3574], 95.00th=[ 3708], 00:25:10.081 | 99.00th=[ 3910], 99.50th=[ 3910], 99.90th=[ 3943], 99.95th=[ 3943], 00:25:10.081 | 99.99th=[ 3943] 00:25:10.081 bw ( KiB/s): min= 2048, max=75776, per=0.92%, avg=39172.81, stdev=19107.90, samples=16 00:25:10.081 iops : min= 2, max= 74, avg=38.25, stdev=18.66, samples=16 00:25:10.081 lat (msec) : 100=0.46%, 250=1.15%, 500=1.39%, 750=1.39%, 1000=1.39% 00:25:10.081 lat (msec) : 2000=6.47%, >=2000=87.76% 00:25:10.081 cpu : usr=0.01%, sys=1.25%, ctx=738, majf=0, minf=32769 00:25:10.081 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:25:10.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.081 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.081 issued rwts: total=433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.081 job3: (groupid=0, jobs=1): err= 0: pid=424765: Tue Nov 19 01:09:14 2024 00:25:10.081 read: IOPS=60, BW=60.0MiB/s (63.0MB/s)(606MiB/10094msec) 00:25:10.081 slat (usec): min=39, max=172338, avg=16558.62, stdev=31872.89 00:25:10.081 clat (msec): min=56, max=3682, avg=1960.42, stdev=1000.29 00:25:10.081 lat (msec): min=172, max=3698, avg=1976.98, stdev=1005.22 00:25:10.081 clat percentiles (msec): 00:25:10.081 | 1.00th=[ 197], 5.00th=[ 477], 10.00th=[ 802], 20.00th=[ 1217], 00:25:10.081 | 30.00th=[ 1284], 40.00th=[ 1385], 50.00th=[ 1569], 60.00th=[ 2165], 00:25:10.081 | 70.00th=[ 2702], 80.00th=[ 3205], 90.00th=[ 3406], 95.00th=[ 3473], 00:25:10.081 | 99.00th=[ 3641], 99.50th=[ 3675], 99.90th=[ 3675], 
99.95th=[ 3675], 00:25:10.081 | 99.99th=[ 3675] 00:25:10.081 bw ( KiB/s): min=24576, max=116736, per=1.28%, avg=54346.06, stdev=29265.45, samples=18 00:25:10.081 iops : min= 24, max= 114, avg=52.89, stdev=28.66, samples=18 00:25:10.081 lat (msec) : 100=0.17%, 250=1.65%, 500=5.12%, 750=2.64%, 1000=4.79% 00:25:10.082 lat (msec) : 2000=43.56%, >=2000=42.08% 00:25:10.082 cpu : usr=0.04%, sys=1.23%, ctx=898, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.082 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job3: (groupid=0, jobs=1): err= 0: pid=424766: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=44, BW=44.8MiB/s (47.0MB/s)(452MiB/10090msec) 00:25:10.082 slat (usec): min=39, max=166530, avg=22121.47, stdev=36658.97 00:25:10.082 clat (msec): min=89, max=3976, avg=2665.77, stdev=948.01 00:25:10.082 lat (msec): min=114, max=4011, avg=2687.89, stdev=946.71 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 144], 5.00th=[ 550], 10.00th=[ 1133], 20.00th=[ 2072], 00:25:10.082 | 30.00th=[ 2400], 40.00th=[ 2567], 50.00th=[ 2735], 60.00th=[ 3004], 00:25:10.082 | 70.00th=[ 3306], 80.00th=[ 3473], 90.00th=[ 3809], 95.00th=[ 3876], 00:25:10.082 | 99.00th=[ 3943], 99.50th=[ 3943], 99.90th=[ 3977], 99.95th=[ 3977], 00:25:10.082 | 99.99th=[ 3977] 00:25:10.082 bw ( KiB/s): min=12288, max=67584, per=0.87%, avg=36981.61, stdev=15332.87, samples=18 00:25:10.082 iops : min= 12, max= 66, avg=36.11, stdev=14.97, samples=18 00:25:10.082 lat (msec) : 100=0.22%, 250=1.33%, 500=2.43%, 750=3.76%, 1000=1.99% 00:25:10.082 lat (msec) : 2000=8.85%, >=2000=81.42% 00:25:10.082 cpu : usr=0.00%, sys=1.28%, ctx=866, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.1% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.082 issued rwts: total=452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job3: (groupid=0, jobs=1): err= 0: pid=424767: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=50, BW=50.8MiB/s (53.3MB/s)(512MiB/10071msec) 00:25:10.082 slat (usec): min=57, max=276335, avg=19553.16, stdev=34225.82 00:25:10.082 clat (msec): min=56, max=3102, avg=2317.00, stdev=697.56 00:25:10.082 lat (msec): min=182, max=3121, avg=2336.55, stdev=698.19 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 271], 5.00th=[ 584], 10.00th=[ 1217], 20.00th=[ 1888], 00:25:10.082 | 30.00th=[ 2299], 40.00th=[ 2400], 50.00th=[ 2467], 60.00th=[ 2601], 00:25:10.082 | 70.00th=[ 2802], 80.00th=[ 2903], 90.00th=[ 2970], 95.00th=[ 3004], 00:25:10.082 | 99.00th=[ 3071], 99.50th=[ 3104], 99.90th=[ 3104], 99.95th=[ 3104], 00:25:10.082 | 99.99th=[ 3104] 00:25:10.082 bw ( KiB/s): min=20480, max=79872, per=1.09%, avg=46256.35, stdev=15933.41, samples=17 00:25:10.082 iops : min= 20, max= 78, avg=45.12, stdev=15.60, samples=17 00:25:10.082 lat (msec) : 100=0.20%, 250=0.39%, 500=3.52%, 750=1.56%, 1000=1.56% 00:25:10.082 lat (msec) : 2000=15.62%, >=2000=77.15% 00:25:10.082 cpu : usr=0.05%, sys=1.26%, ctx=1043, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.2%, 
2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.082 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job3: (groupid=0, jobs=1): err= 0: pid=424768: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=56, BW=56.8MiB/s (59.6MB/s)(574MiB/10105msec) 00:25:10.082 slat (usec): min=38, max=177114, avg=17460.93, stdev=30315.11 00:25:10.082 clat (msec): min=78, max=3487, avg=1907.74, stdev=936.78 00:25:10.082 lat (msec): min=143, max=3499, avg=1925.20, stdev=939.74 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 186], 5.00th=[ 709], 10.00th=[ 852], 20.00th=[ 911], 00:25:10.082 | 30.00th=[ 1070], 40.00th=[ 1418], 50.00th=[ 1972], 60.00th=[ 2299], 00:25:10.082 | 70.00th=[ 2467], 80.00th=[ 2937], 90.00th=[ 3272], 95.00th=[ 3373], 00:25:10.082 | 99.00th=[ 3406], 99.50th=[ 3440], 99.90th=[ 3473], 99.95th=[ 3473], 00:25:10.082 | 99.99th=[ 3473] 00:25:10.082 bw ( KiB/s): min=14336, max=159744, per=1.43%, avg=60893.87, stdev=44919.26, samples=15 00:25:10.082 iops : min= 14, max= 156, avg=59.47, stdev=43.87, samples=15 00:25:10.082 lat (msec) : 100=0.17%, 250=1.74%, 500=1.92%, 750=1.39%, 1000=20.91% 00:25:10.082 lat (msec) : 2000=24.91%, >=2000=48.95% 00:25:10.082 cpu : usr=0.02%, sys=1.27%, ctx=997, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.082 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job3: (groupid=0, jobs=1): err= 0: pid=424769: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=41, BW=41.1MiB/s (43.1MB/s)(416MiB/10127msec) 00:25:10.082 slat (usec): min=29, max=158651, avg=24105.30, stdev=32575.35 00:25:10.082 clat (msec): min=96, max=4031, avg=2655.24, stdev=1175.55 00:25:10.082 lat (msec): min=247, max=4039, avg=2679.35, stdev=1179.34 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 253], 5.00th=[ 426], 10.00th=[ 600], 20.00th=[ 1401], 00:25:10.082 | 30.00th=[ 2433], 40.00th=[ 2769], 50.00th=[ 3037], 60.00th=[ 3406], 00:25:10.082 | 70.00th=[ 3507], 80.00th=[ 3675], 90.00th=[ 3775], 95.00th=[ 3910], 00:25:10.082 | 99.00th=[ 3977], 99.50th=[ 4010], 99.90th=[ 4044], 99.95th=[ 4044], 00:25:10.082 | 99.99th=[ 4044] 00:25:10.082 bw ( KiB/s): min=18432, max=104448, per=0.92%, avg=39317.73, stdev=20703.30, samples=15 00:25:10.082 iops : min= 18, max= 102, avg=38.33, stdev=20.25, samples=15 00:25:10.082 lat (msec) : 100=0.24%, 250=0.72%, 500=6.73%, 750=6.73%, 1000=3.85% 00:25:10.082 lat (msec) : 2000=7.45%, >=2000=74.28% 00:25:10.082 cpu : usr=0.00%, sys=1.01%, ctx=712, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.7%, >=64=84.9% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.082 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job3: (groupid=0, jobs=1): err= 0: pid=424770: Tue Nov 19 01:09:14 2024 
00:25:10.082 read: IOPS=44, BW=44.9MiB/s (47.1MB/s)(457MiB/10184msec) 00:25:10.082 slat (usec): min=51, max=218652, avg=22012.41, stdev=35103.24 00:25:10.082 clat (msec): min=121, max=3933, avg=2689.38, stdev=1062.90 00:25:10.082 lat (msec): min=201, max=3942, avg=2711.40, stdev=1065.59 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 271], 5.00th=[ 709], 10.00th=[ 1267], 20.00th=[ 1653], 00:25:10.082 | 30.00th=[ 1888], 40.00th=[ 2500], 50.00th=[ 3071], 60.00th=[ 3440], 00:25:10.082 | 70.00th=[ 3540], 80.00th=[ 3742], 90.00th=[ 3809], 95.00th=[ 3876], 00:25:10.082 | 99.00th=[ 3910], 99.50th=[ 3910], 99.90th=[ 3943], 99.95th=[ 3943], 00:25:10.082 | 99.99th=[ 3943] 00:25:10.082 bw ( KiB/s): min=10240, max=71680, per=0.88%, avg=37422.39, stdev=15539.34, samples=18 00:25:10.082 iops : min= 10, max= 70, avg=36.44, stdev=15.11, samples=18 00:25:10.082 lat (msec) : 250=0.88%, 500=1.97%, 750=2.63%, 1000=2.63%, 2000=26.91% 00:25:10.082 lat (msec) : >=2000=64.99% 00:25:10.082 cpu : usr=0.04%, sys=1.29%, ctx=819, majf=0, minf=32394 00:25:10.082 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.0%, >=64=86.2% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.082 issued rwts: total=457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job3: (groupid=0, jobs=1): err= 0: pid=424771: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=51, BW=51.6MiB/s (54.1MB/s)(524MiB/10163msec) 00:25:10.082 slat (usec): min=41, max=194405, avg=19080.53, stdev=25254.56 00:25:10.082 clat (msec): min=161, max=3486, avg=2111.34, stdev=992.51 00:25:10.082 lat (msec): min=163, max=3537, avg=2130.42, stdev=998.13 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 194], 5.00th=[ 456], 10.00th=[ 718], 20.00th=[ 1318], 00:25:10.082 | 30.00th=[ 1418], 40.00th=[ 1569], 50.00th=[ 1955], 60.00th=[ 2802], 00:25:10.082 | 70.00th=[ 3004], 80.00th=[ 3171], 90.00th=[ 3272], 95.00th=[ 3373], 00:25:10.082 | 99.00th=[ 3406], 99.50th=[ 3473], 99.90th=[ 3473], 99.95th=[ 3473], 00:25:10.082 | 99.99th=[ 3473] 00:25:10.082 bw ( KiB/s): min= 2048, max=96256, per=1.27%, avg=54199.60, stdev=28506.58, samples=15 00:25:10.082 iops : min= 2, max= 94, avg=52.87, stdev=27.89, samples=15 00:25:10.082 lat (msec) : 250=2.10%, 500=3.24%, 750=5.53%, 1000=3.24%, 2000=35.88% 00:25:10.082 lat (msec) : >=2000=50.00% 00:25:10.082 cpu : usr=0.04%, sys=1.44%, ctx=810, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=88.0% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.082 issued rwts: total=524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job3: (groupid=0, jobs=1): err= 0: pid=424772: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=44, BW=44.8MiB/s (47.0MB/s)(453MiB/10110msec) 00:25:10.082 slat (usec): min=72, max=168088, avg=22085.26, stdev=24673.10 00:25:10.082 clat (msec): min=102, max=2970, avg=2394.11, stdev=578.52 00:25:10.082 lat (msec): min=124, max=3025, avg=2416.19, stdev=576.31 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 397], 5.00th=[ 978], 10.00th=[ 1452], 20.00th=[ 2265], 00:25:10.082 | 30.00th=[ 2433], 40.00th=[ 2500], 50.00th=[ 2601], 60.00th=[ 
2668], 00:25:10.082 | 70.00th=[ 2735], 80.00th=[ 2769], 90.00th=[ 2836], 95.00th=[ 2869], 00:25:10.082 | 99.00th=[ 2937], 99.50th=[ 2970], 99.90th=[ 2970], 99.95th=[ 2970], 00:25:10.082 | 99.99th=[ 2970] 00:25:10.082 bw ( KiB/s): min=10240, max=61440, per=1.05%, avg=44509.87, stdev=13770.35, samples=15 00:25:10.082 iops : min= 10, max= 60, avg=43.47, stdev=13.45, samples=15 00:25:10.082 lat (msec) : 250=0.66%, 500=0.66%, 750=1.10%, 1000=2.87%, 2000=11.26% 00:25:10.082 lat (msec) : >=2000=83.44% 00:25:10.082 cpu : usr=0.03%, sys=1.23%, ctx=900, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.1% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.082 issued rwts: total=453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job3: (groupid=0, jobs=1): err= 0: pid=424773: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=67, BW=67.1MiB/s (70.3MB/s)(679MiB/10122msec) 00:25:10.082 slat (usec): min=35, max=235569, avg=14727.31, stdev=27975.48 00:25:10.082 clat (msec): min=118, max=3173, avg=1749.17, stdev=622.77 00:25:10.082 lat (msec): min=132, max=3192, avg=1763.90, stdev=624.01 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 218], 5.00th=[ 625], 10.00th=[ 1099], 20.00th=[ 1301], 00:25:10.082 | 30.00th=[ 1385], 40.00th=[ 1552], 50.00th=[ 1720], 60.00th=[ 1871], 00:25:10.082 | 70.00th=[ 2039], 80.00th=[ 2165], 90.00th=[ 2668], 95.00th=[ 2903], 00:25:10.082 | 99.00th=[ 3071], 99.50th=[ 3138], 99.90th=[ 3171], 99.95th=[ 3171], 00:25:10.082 | 99.99th=[ 3171] 00:25:10.082 bw ( KiB/s): min= 4096, max=114688, per=1.56%, avg=66503.35, stdev=30364.65, samples=17 00:25:10.082 iops : min= 4, max= 112, avg=64.94, stdev=29.66, samples=17 00:25:10.082 lat (msec) : 250=1.33%, 500=2.80%, 750=1.33%, 1000=2.80%, 2000=59.65% 00:25:10.082 lat (msec) : >=2000=32.11% 00:25:10.082 cpu : usr=0.02%, sys=1.43%, ctx=1020, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.082 issued rwts: total=679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job3: (groupid=0, jobs=1): err= 0: pid=424774: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=80, BW=80.2MiB/s (84.1MB/s)(814MiB/10151msec) 00:25:10.082 slat (usec): min=30, max=160846, avg=12315.02, stdev=29147.29 00:25:10.082 clat (msec): min=123, max=1953, avg=1489.18, stdev=248.77 00:25:10.082 lat (msec): min=212, max=1957, avg=1501.50, stdev=247.43 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 498], 5.00th=[ 1011], 10.00th=[ 1250], 20.00th=[ 1385], 00:25:10.082 | 30.00th=[ 1435], 40.00th=[ 1469], 50.00th=[ 1519], 60.00th=[ 1569], 00:25:10.082 | 70.00th=[ 1603], 80.00th=[ 1653], 90.00th=[ 1754], 95.00th=[ 1804], 00:25:10.082 | 99.00th=[ 1871], 99.50th=[ 1905], 99.90th=[ 1955], 99.95th=[ 1955], 00:25:10.082 | 99.99th=[ 1955] 00:25:10.082 bw ( KiB/s): min=20480, max=124928, per=1.94%, avg=82634.35, stdev=22958.16, samples=17 00:25:10.082 iops : min= 20, max= 122, avg=80.65, stdev=22.45, samples=17 00:25:10.082 lat (msec) : 250=0.37%, 500=0.74%, 750=1.47%, 1000=1.97%, 2000=95.45% 00:25:10.082 cpu : 
usr=0.00%, sys=1.58%, ctx=764, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:10.082 issued rwts: total=814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job3: (groupid=0, jobs=1): err= 0: pid=424775: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=50, BW=50.1MiB/s (52.6MB/s)(507MiB/10116msec) 00:25:10.082 slat (usec): min=49, max=221180, avg=19834.86, stdev=40877.57 00:25:10.082 clat (msec): min=56, max=3765, avg=2342.23, stdev=829.40 00:25:10.082 lat (msec): min=173, max=3770, avg=2362.06, stdev=831.41 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 422], 5.00th=[ 810], 10.00th=[ 1183], 20.00th=[ 1720], 00:25:10.082 | 30.00th=[ 1938], 40.00th=[ 2140], 50.00th=[ 2333], 60.00th=[ 2567], 00:25:10.082 | 70.00th=[ 2869], 80.00th=[ 3205], 90.00th=[ 3440], 95.00th=[ 3574], 00:25:10.082 | 99.00th=[ 3675], 99.50th=[ 3775], 99.90th=[ 3775], 99.95th=[ 3775], 00:25:10.082 | 99.99th=[ 3775] 00:25:10.082 bw ( KiB/s): min=24576, max=94208, per=1.07%, avg=45658.35, stdev=16300.02, samples=17 00:25:10.082 iops : min= 24, max= 92, avg=44.59, stdev=15.92, samples=17 00:25:10.082 lat (msec) : 100=0.20%, 250=0.59%, 500=1.58%, 750=2.56%, 1000=1.78% 00:25:10.082 lat (msec) : 2000=27.42%, >=2000=65.88% 00:25:10.082 cpu : usr=0.00%, sys=1.30%, ctx=807, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.6% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.082 issued rwts: total=507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job4: (groupid=0, jobs=1): err= 0: pid=424776: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=51, BW=51.0MiB/s (53.5MB/s)(520MiB/10194msec) 00:25:10.082 slat (usec): min=35, max=164155, avg=19287.07, stdev=27472.83 00:25:10.082 clat (msec): min=161, max=4228, avg=2379.24, stdev=924.62 00:25:10.082 lat (msec): min=254, max=4230, avg=2398.52, stdev=925.55 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 368], 5.00th=[ 684], 10.00th=[ 1301], 20.00th=[ 1636], 00:25:10.082 | 30.00th=[ 1871], 40.00th=[ 2089], 50.00th=[ 2400], 60.00th=[ 2635], 00:25:10.082 | 70.00th=[ 2735], 80.00th=[ 3071], 90.00th=[ 3842], 95.00th=[ 4077], 00:25:10.082 | 99.00th=[ 4178], 99.50th=[ 4212], 99.90th=[ 4245], 99.95th=[ 4245], 00:25:10.082 | 99.99th=[ 4245] 00:25:10.082 bw ( KiB/s): min=16384, max=114688, per=1.05%, avg=44586.56, stdev=26149.12, samples=18 00:25:10.082 iops : min= 16, max= 112, avg=43.44, stdev=25.47, samples=18 00:25:10.082 lat (msec) : 250=0.19%, 500=1.15%, 750=5.00%, 1000=1.92%, 2000=27.12% 00:25:10.082 lat (msec) : >=2000=64.62% 00:25:10.082 cpu : usr=0.02%, sys=1.48%, ctx=956, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.9% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.082 issued rwts: total=520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job4: 
(groupid=0, jobs=1): err= 0: pid=424777: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=58, BW=58.3MiB/s (61.2MB/s)(588MiB/10081msec) 00:25:10.082 slat (usec): min=31, max=216339, avg=17023.50, stdev=35164.08 00:25:10.082 clat (msec): min=68, max=3572, avg=2039.76, stdev=692.38 00:25:10.082 lat (msec): min=91, max=3573, avg=2056.79, stdev=691.77 00:25:10.082 clat percentiles (msec): 00:25:10.082 | 1.00th=[ 109], 5.00th=[ 743], 10.00th=[ 1452], 20.00th=[ 1636], 00:25:10.082 | 30.00th=[ 1737], 40.00th=[ 1838], 50.00th=[ 1955], 60.00th=[ 2072], 00:25:10.082 | 70.00th=[ 2333], 80.00th=[ 2567], 90.00th=[ 3037], 95.00th=[ 3306], 00:25:10.082 | 99.00th=[ 3507], 99.50th=[ 3574], 99.90th=[ 3574], 99.95th=[ 3574], 00:25:10.082 | 99.99th=[ 3574] 00:25:10.082 bw ( KiB/s): min=26624, max=116736, per=1.30%, avg=55416.47, stdev=25960.68, samples=17 00:25:10.082 iops : min= 26, max= 114, avg=54.12, stdev=25.35, samples=17 00:25:10.082 lat (msec) : 100=0.68%, 250=1.36%, 500=1.70%, 750=1.36%, 1000=1.87% 00:25:10.082 lat (msec) : 2000=50.00%, >=2000=43.03% 00:25:10.082 cpu : usr=0.01%, sys=1.18%, ctx=813, majf=0, minf=32769 00:25:10.082 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:25:10.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.082 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.082 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.082 job4: (groupid=0, jobs=1): err= 0: pid=424778: Tue Nov 19 01:09:14 2024 00:25:10.082 read: IOPS=86, BW=86.0MiB/s (90.2MB/s)(867MiB/10081msec) 00:25:10.082 slat (usec): min=40, max=142349, avg=11534.66, stdev=22442.84 00:25:10.082 clat (msec): min=75, max=2618, avg=1260.24, stdev=443.73 00:25:10.082 lat (msec): min=123, max=2622, avg=1271.77, stdev=445.84 00:25:10.082 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 169], 5.00th=[ 584], 10.00th=[ 860], 20.00th=[ 902], 00:25:10.083 | 30.00th=[ 995], 40.00th=[ 1070], 50.00th=[ 1200], 60.00th=[ 1385], 00:25:10.083 | 70.00th=[ 1519], 80.00th=[ 1670], 90.00th=[ 1787], 95.00th=[ 1871], 00:25:10.083 | 99.00th=[ 2534], 99.50th=[ 2601], 99.90th=[ 2635], 99.95th=[ 2635], 00:25:10.083 | 99.99th=[ 2635] 00:25:10.083 bw ( KiB/s): min=38912, max=155648, per=2.38%, avg=101000.40, stdev=34387.73, samples=15 00:25:10.083 iops : min= 38, max= 152, avg=98.60, stdev=33.60, samples=15 00:25:10.083 lat (msec) : 100=0.12%, 250=2.08%, 500=2.65%, 750=1.73%, 1000=24.11% 00:25:10.083 lat (msec) : 2000=65.63%, >=2000=3.69% 00:25:10.083 cpu : usr=0.02%, sys=1.65%, ctx=1114, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.7% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:10.083 issued rwts: total=867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job4: (groupid=0, jobs=1): err= 0: pid=424779: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=40, BW=40.0MiB/s (42.0MB/s)(404MiB/10089msec) 00:25:10.083 slat (usec): min=46, max=256365, avg=24797.06, stdev=38748.73 00:25:10.083 clat (msec): min=68, max=5201, avg=2958.97, stdev=1412.50 00:25:10.083 lat (msec): min=123, max=5215, avg=2983.76, stdev=1416.50 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 409], 5.00th=[ 1250], 10.00th=[ 1284], 
20.00th=[ 1401], 00:25:10.083 | 30.00th=[ 1838], 40.00th=[ 2198], 50.00th=[ 2802], 60.00th=[ 3406], 00:25:10.083 | 70.00th=[ 4178], 80.00th=[ 4597], 90.00th=[ 4866], 95.00th=[ 5000], 00:25:10.083 | 99.00th=[ 5134], 99.50th=[ 5201], 99.90th=[ 5201], 99.95th=[ 5201], 00:25:10.083 | 99.99th=[ 5201] 00:25:10.083 bw ( KiB/s): min= 6144, max=83968, per=0.74%, avg=31400.17, stdev=18170.40, samples=18 00:25:10.083 iops : min= 6, max= 82, avg=30.61, stdev=17.77, samples=18 00:25:10.083 lat (msec) : 100=0.25%, 250=0.25%, 500=0.99%, 750=0.99%, 1000=0.25% 00:25:10.083 lat (msec) : 2000=32.18%, >=2000=65.10% 00:25:10.083 cpu : usr=0.01%, sys=1.26%, ctx=889, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=7.9%, >=64=84.4% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:10.083 issued rwts: total=404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job4: (groupid=0, jobs=1): err= 0: pid=424780: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=57, BW=57.3MiB/s (60.1MB/s)(583MiB/10180msec) 00:25:10.083 slat (usec): min=44, max=345334, avg=17335.42, stdev=32431.13 00:25:10.083 clat (msec): min=70, max=5336, avg=2027.74, stdev=1198.89 00:25:10.083 lat (msec): min=307, max=5339, avg=2045.07, stdev=1201.88 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 502], 5.00th=[ 978], 10.00th=[ 1011], 20.00th=[ 1116], 00:25:10.083 | 30.00th=[ 1217], 40.00th=[ 1385], 50.00th=[ 1435], 60.00th=[ 1737], 00:25:10.083 | 70.00th=[ 2333], 80.00th=[ 2970], 90.00th=[ 4044], 95.00th=[ 4866], 00:25:10.083 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 00:25:10.083 | 99.99th=[ 5336] 00:25:10.083 bw ( KiB/s): min= 4096, max=135168, per=1.29%, avg=54814.12, stdev=43748.47, samples=17 00:25:10.083 iops : min= 4, max= 132, avg=53.53, stdev=42.72, samples=17 00:25:10.083 lat (msec) : 100=0.17%, 500=0.69%, 750=0.69%, 1000=6.69%, 2000=57.80% 00:25:10.083 lat (msec) : >=2000=33.96% 00:25:10.083 cpu : usr=0.02%, sys=1.36%, ctx=922, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.083 issued rwts: total=583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job4: (groupid=0, jobs=1): err= 0: pid=424781: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=29, BW=29.4MiB/s (30.8MB/s)(297MiB/10109msec) 00:25:10.083 slat (usec): min=737, max=158580, avg=33669.39, stdev=29168.46 00:25:10.083 clat (msec): min=107, max=5074, avg=3616.05, stdev=1537.90 00:25:10.083 lat (msec): min=191, max=5092, avg=3649.72, stdev=1539.57 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 207], 5.00th=[ 464], 10.00th=[ 978], 20.00th=[ 2022], 00:25:10.083 | 30.00th=[ 2937], 40.00th=[ 3910], 50.00th=[ 4279], 60.00th=[ 4665], 00:25:10.083 | 70.00th=[ 4866], 80.00th=[ 4933], 90.00th=[ 4933], 95.00th=[ 5000], 00:25:10.083 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 00:25:10.083 | 99.99th=[ 5067] 00:25:10.083 bw ( KiB/s): min=10240, max=36864, per=0.63%, avg=26781.54, stdev=8463.20, samples=13 00:25:10.083 iops : min= 10, max= 36, avg=26.15, stdev= 8.26, samples=13 
00:25:10.083 lat (msec) : 250=1.68%, 500=3.70%, 750=1.68%, 1000=3.37%, 2000=9.09% 00:25:10.083 lat (msec) : >=2000=80.47% 00:25:10.083 cpu : usr=0.03%, sys=1.11%, ctx=839, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.8% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:25:10.083 issued rwts: total=297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job4: (groupid=0, jobs=1): err= 0: pid=424782: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=51, BW=52.0MiB/s (54.5MB/s)(526MiB/10117msec) 00:25:10.083 slat (usec): min=36, max=169553, avg=19084.39, stdev=31352.90 00:25:10.083 clat (msec): min=75, max=4737, avg=1944.42, stdev=1040.43 00:25:10.083 lat (msec): min=124, max=4752, avg=1963.51, stdev=1047.08 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 184], 5.00th=[ 894], 10.00th=[ 1133], 20.00th=[ 1351], 00:25:10.083 | 30.00th=[ 1385], 40.00th=[ 1418], 50.00th=[ 1485], 60.00th=[ 1603], 00:25:10.083 | 70.00th=[ 1905], 80.00th=[ 2802], 90.00th=[ 3842], 95.00th=[ 4329], 00:25:10.083 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732], 00:25:10.083 | 99.99th=[ 4732] 00:25:10.083 bw ( KiB/s): min=14336, max=118784, per=1.60%, avg=67909.58, stdev=34686.01, samples=12 00:25:10.083 iops : min= 14, max= 116, avg=66.25, stdev=33.82, samples=12 00:25:10.083 lat (msec) : 100=0.19%, 250=1.14%, 500=0.76%, 750=0.38%, 1000=3.99% 00:25:10.083 lat (msec) : 2000=66.54%, >=2000=27.00% 00:25:10.083 cpu : usr=0.01%, sys=1.25%, ctx=871, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.083 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job4: (groupid=0, jobs=1): err= 0: pid=424783: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=25, BW=25.3MiB/s (26.6MB/s)(257MiB/10147msec) 00:25:10.083 slat (usec): min=425, max=193512, avg=38921.69, stdev=32325.69 00:25:10.083 clat (msec): min=142, max=5815, avg=3504.79, stdev=1723.98 00:25:10.083 lat (msec): min=175, max=5857, avg=3543.71, stdev=1731.69 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 197], 5.00th=[ 405], 10.00th=[ 751], 20.00th=[ 1636], 00:25:10.083 | 30.00th=[ 2400], 40.00th=[ 3272], 50.00th=[ 4212], 60.00th=[ 4530], 00:25:10.083 | 70.00th=[ 4732], 80.00th=[ 5134], 90.00th=[ 5403], 95.00th=[ 5604], 00:25:10.083 | 99.00th=[ 5738], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:25:10.083 | 99.99th=[ 5805] 00:25:10.083 bw ( KiB/s): min=12312, max=40960, per=0.70%, avg=29596.22, stdev=9547.10, samples=9 00:25:10.083 iops : min= 12, max= 40, avg=28.89, stdev= 9.33, samples=9 00:25:10.083 lat (msec) : 250=2.33%, 500=3.50%, 750=3.89%, 1000=3.89%, 2000=12.45% 00:25:10.083 lat (msec) : >=2000=73.93% 00:25:10.083 cpu : usr=0.02%, sys=0.93%, ctx=840, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.2%, 32=12.5%, >=64=75.5% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:25:10.083 issued 
rwts: total=257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job4: (groupid=0, jobs=1): err= 0: pid=424784: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=56, BW=56.4MiB/s (59.1MB/s)(571MiB/10127msec) 00:25:10.083 slat (usec): min=41, max=236433, avg=17510.74, stdev=38657.37 00:25:10.083 clat (msec): min=126, max=4450, avg=2145.37, stdev=1159.58 00:25:10.083 lat (msec): min=128, max=4466, avg=2162.88, stdev=1165.32 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 136], 5.00th=[ 305], 10.00th=[ 810], 20.00th=[ 1435], 00:25:10.083 | 30.00th=[ 1502], 40.00th=[ 1620], 50.00th=[ 1720], 60.00th=[ 1921], 00:25:10.083 | 70.00th=[ 2567], 80.00th=[ 3473], 90.00th=[ 4044], 95.00th=[ 4245], 00:25:10.083 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4463], 99.95th=[ 4463], 00:25:10.083 | 99.99th=[ 4463] 00:25:10.083 bw ( KiB/s): min=12288, max=118784, per=1.19%, avg=50517.33, stdev=33732.63, samples=18 00:25:10.083 iops : min= 12, max= 116, avg=49.33, stdev=32.94, samples=18 00:25:10.083 lat (msec) : 250=2.63%, 500=4.38%, 750=2.45%, 1000=1.75%, 2000=50.26% 00:25:10.083 lat (msec) : >=2000=38.53% 00:25:10.083 cpu : usr=0.01%, sys=1.43%, ctx=856, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.083 issued rwts: total=571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job4: (groupid=0, jobs=1): err= 0: pid=424785: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=35, BW=35.4MiB/s (37.1MB/s)(360MiB/10170msec) 00:25:10.083 slat (usec): min=36, max=209475, avg=27809.79, stdev=33379.93 00:25:10.083 clat (msec): min=155, max=5816, avg=3109.60, stdev=1360.75 00:25:10.083 lat (msec): min=175, max=5831, avg=3137.41, stdev=1358.02 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 203], 5.00th=[ 726], 10.00th=[ 1586], 20.00th=[ 2232], 00:25:10.083 | 30.00th=[ 2265], 40.00th=[ 2467], 50.00th=[ 2802], 60.00th=[ 3037], 00:25:10.083 | 70.00th=[ 3775], 80.00th=[ 4463], 90.00th=[ 5269], 95.00th=[ 5537], 00:25:10.083 | 99.00th=[ 5671], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:25:10.083 | 99.99th=[ 5805] 00:25:10.083 bw ( KiB/s): min=12288, max=79872, per=0.74%, avg=31675.73, stdev=20448.77, samples=15 00:25:10.083 iops : min= 12, max= 78, avg=30.93, stdev=19.97, samples=15 00:25:10.083 lat (msec) : 250=1.39%, 500=1.67%, 750=1.94%, 1000=0.56%, 2000=6.67% 00:25:10.083 lat (msec) : >=2000=87.78% 00:25:10.083 cpu : usr=0.04%, sys=1.32%, ctx=790, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.9%, >=64=82.5% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:10.083 issued rwts: total=360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job4: (groupid=0, jobs=1): err= 0: pid=424786: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=39, BW=39.2MiB/s (41.2MB/s)(397MiB/10115msec) 00:25:10.083 slat (usec): min=60, max=346528, avg=25217.56, stdev=35532.34 00:25:10.083 clat (msec): min=100, max=3965, avg=2586.79, stdev=940.27 00:25:10.083 lat (msec): min=129, max=3976, 
avg=2612.01, stdev=938.32 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 190], 5.00th=[ 634], 10.00th=[ 1234], 20.00th=[ 2089], 00:25:10.083 | 30.00th=[ 2265], 40.00th=[ 2333], 50.00th=[ 2433], 60.00th=[ 2869], 00:25:10.083 | 70.00th=[ 3373], 80.00th=[ 3540], 90.00th=[ 3708], 95.00th=[ 3842], 00:25:10.083 | 99.00th=[ 3910], 99.50th=[ 3977], 99.90th=[ 3977], 99.95th=[ 3977], 00:25:10.083 | 99.99th=[ 3977] 00:25:10.083 bw ( KiB/s): min= 6156, max=63488, per=1.00%, avg=42402.31, stdev=19302.72, samples=13 00:25:10.083 iops : min= 6, max= 62, avg=41.38, stdev=18.84, samples=13 00:25:10.083 lat (msec) : 250=1.76%, 500=1.76%, 750=2.52%, 1000=1.76%, 2000=9.57% 00:25:10.083 lat (msec) : >=2000=82.62% 00:25:10.083 cpu : usr=0.00%, sys=1.07%, ctx=792, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:10.083 issued rwts: total=397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job4: (groupid=0, jobs=1): err= 0: pid=424787: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=50, BW=50.4MiB/s (52.9MB/s)(510MiB/10112msec) 00:25:10.083 slat (usec): min=39, max=172940, avg=19628.99, stdev=29897.37 00:25:10.083 clat (msec): min=98, max=4178, avg=2004.77, stdev=989.95 00:25:10.083 lat (msec): min=111, max=4184, avg=2024.40, stdev=994.95 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 122], 5.00th=[ 296], 10.00th=[ 584], 20.00th=[ 1284], 00:25:10.083 | 30.00th=[ 1502], 40.00th=[ 1720], 50.00th=[ 1955], 60.00th=[ 2265], 00:25:10.083 | 70.00th=[ 2500], 80.00th=[ 2903], 90.00th=[ 3138], 95.00th=[ 3977], 00:25:10.083 | 99.00th=[ 4178], 99.50th=[ 4178], 99.90th=[ 4178], 99.95th=[ 4178], 00:25:10.083 | 99.99th=[ 4178] 00:25:10.083 bw ( KiB/s): min= 2048, max=102400, per=1.42%, avg=60179.69, stdev=30714.75, samples=13 00:25:10.083 iops : min= 2, max= 100, avg=58.77, stdev=29.99, samples=13 00:25:10.083 lat (msec) : 100=0.20%, 250=4.12%, 500=3.53%, 750=5.29%, 1000=4.51% 00:25:10.083 lat (msec) : 2000=33.33%, >=2000=49.02% 00:25:10.083 cpu : usr=0.01%, sys=1.17%, ctx=944, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.083 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job4: (groupid=0, jobs=1): err= 0: pid=424788: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=73, BW=73.6MiB/s (77.2MB/s)(742MiB/10084msec) 00:25:10.083 slat (usec): min=39, max=193509, avg=13478.39, stdev=32568.70 00:25:10.083 clat (msec): min=79, max=2107, avg=1579.45, stdev=432.98 00:25:10.083 lat (msec): min=211, max=2193, avg=1592.93, stdev=434.48 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 222], 5.00th=[ 506], 10.00th=[ 844], 20.00th=[ 1385], 00:25:10.083 | 30.00th=[ 1586], 40.00th=[ 1620], 50.00th=[ 1670], 60.00th=[ 1737], 00:25:10.083 | 70.00th=[ 1787], 80.00th=[ 1888], 90.00th=[ 2022], 95.00th=[ 2072], 00:25:10.083 | 99.00th=[ 2106], 99.50th=[ 2106], 99.90th=[ 2106], 99.95th=[ 2106], 00:25:10.083 | 99.99th=[ 2106] 00:25:10.083 bw ( KiB/s): min=26624, max=98304, 
per=1.74%, avg=74098.29, stdev=18885.77, samples=17 00:25:10.083 iops : min= 26, max= 96, avg=72.35, stdev=18.44, samples=17 00:25:10.083 lat (msec) : 100=0.13%, 250=2.02%, 500=2.16%, 750=4.18%, 1000=2.02% 00:25:10.083 lat (msec) : 2000=78.57%, >=2000=10.92% 00:25:10.083 cpu : usr=0.03%, sys=1.38%, ctx=707, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.083 issued rwts: total=742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job5: (groupid=0, jobs=1): err= 0: pid=424789: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=79, BW=79.1MiB/s (82.9MB/s)(800MiB/10120msec) 00:25:10.083 slat (usec): min=32, max=190968, avg=12498.21, stdev=36660.06 00:25:10.083 clat (msec): min=118, max=2512, avg=1495.85, stdev=354.34 00:25:10.083 lat (msec): min=120, max=2513, avg=1508.35, stdev=354.38 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 288], 5.00th=[ 961], 10.00th=[ 1200], 20.00th=[ 1267], 00:25:10.083 | 30.00th=[ 1301], 40.00th=[ 1351], 50.00th=[ 1485], 60.00th=[ 1586], 00:25:10.083 | 70.00th=[ 1687], 80.00th=[ 1737], 90.00th=[ 1871], 95.00th=[ 2089], 00:25:10.083 | 99.00th=[ 2366], 99.50th=[ 2500], 99.90th=[ 2500], 99.95th=[ 2500], 00:25:10.083 | 99.99th=[ 2500] 00:25:10.083 bw ( KiB/s): min= 4096, max=124928, per=1.80%, avg=76572.44, stdev=30572.18, samples=18 00:25:10.083 iops : min= 4, max= 122, avg=74.78, stdev=29.86, samples=18 00:25:10.083 lat (msec) : 250=0.38%, 500=1.75%, 750=1.50%, 1000=1.62%, 2000=88.12% 00:25:10.083 lat (msec) : >=2000=6.62% 00:25:10.083 cpu : usr=0.04%, sys=1.32%, ctx=867, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.083 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:10.083 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.083 job5: (groupid=0, jobs=1): err= 0: pid=424790: Tue Nov 19 01:09:14 2024 00:25:10.083 read: IOPS=35, BW=35.6MiB/s (37.3MB/s)(361MiB/10143msec) 00:25:10.083 slat (usec): min=33, max=194625, avg=27717.49, stdev=46031.77 00:25:10.083 clat (msec): min=135, max=4204, avg=3059.53, stdev=1186.35 00:25:10.083 lat (msec): min=189, max=4229, avg=3087.25, stdev=1186.91 00:25:10.083 clat percentiles (msec): 00:25:10.083 | 1.00th=[ 209], 5.00th=[ 558], 10.00th=[ 894], 20.00th=[ 1854], 00:25:10.083 | 30.00th=[ 3306], 40.00th=[ 3473], 50.00th=[ 3507], 60.00th=[ 3675], 00:25:10.083 | 70.00th=[ 3842], 80.00th=[ 4010], 90.00th=[ 4077], 95.00th=[ 4111], 00:25:10.083 | 99.00th=[ 4144], 99.50th=[ 4178], 99.90th=[ 4212], 99.95th=[ 4212], 00:25:10.083 | 99.99th=[ 4212] 00:25:10.083 bw ( KiB/s): min= 8192, max=57344, per=0.81%, avg=34230.86, stdev=14690.11, samples=14 00:25:10.083 iops : min= 8, max= 56, avg=33.43, stdev=14.35, samples=14 00:25:10.083 lat (msec) : 250=1.39%, 500=2.22%, 750=4.99%, 1000=2.77%, 2000=11.36% 00:25:10.083 lat (msec) : >=2000=77.29% 00:25:10.083 cpu : usr=0.02%, sys=1.01%, ctx=797, majf=0, minf=32769 00:25:10.083 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.9%, >=64=82.5% 00:25:10.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:10.084 issued rwts: total=361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 0: pid=424791: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=81, BW=81.2MiB/s (85.1MB/s)(825MiB/10160msec) 00:25:10.084 slat (usec): min=50, max=214871, avg=12116.97, stdev=23662.87 00:25:10.084 clat (msec): min=157, max=2157, avg=1463.10, stdev=273.76 00:25:10.084 lat (msec): min=165, max=2159, avg=1475.21, stdev=272.60 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 338], 5.00th=[ 1200], 10.00th=[ 1250], 20.00th=[ 1301], 00:25:10.084 | 30.00th=[ 1351], 40.00th=[ 1401], 50.00th=[ 1485], 60.00th=[ 1552], 00:25:10.084 | 70.00th=[ 1586], 80.00th=[ 1620], 90.00th=[ 1687], 95.00th=[ 1888], 00:25:10.084 | 99.00th=[ 2106], 99.50th=[ 2140], 99.90th=[ 2165], 99.95th=[ 2165], 00:25:10.084 | 99.99th=[ 2165] 00:25:10.084 bw ( KiB/s): min=14336, max=112640, per=1.87%, avg=79409.17, stdev=26460.23, samples=18 00:25:10.084 iops : min= 14, max= 110, avg=77.50, stdev=25.86, samples=18 00:25:10.084 lat (msec) : 250=0.73%, 500=0.85%, 750=1.33%, 1000=1.09%, 2000=92.24% 00:25:10.084 lat (msec) : >=2000=3.76% 00:25:10.084 cpu : usr=0.03%, sys=1.75%, ctx=904, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:10.084 issued rwts: total=825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 0: pid=424792: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=59, BW=60.0MiB/s (62.9MB/s)(605MiB/10091msec) 00:25:10.084 slat (usec): min=37, max=216814, avg=16527.45, stdev=46283.66 00:25:10.084 clat (msec): min=89, max=3013, avg=1863.62, stdev=552.22 00:25:10.084 lat (msec): min=93, max=3014, avg=1880.15, stdev=552.22 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 194], 5.00th=[ 592], 10.00th=[ 1519], 20.00th=[ 1636], 00:25:10.084 | 30.00th=[ 1653], 40.00th=[ 1670], 50.00th=[ 1821], 60.00th=[ 1871], 00:25:10.084 | 70.00th=[ 2123], 80.00th=[ 2333], 90.00th=[ 2601], 95.00th=[ 2802], 00:25:10.084 | 99.00th=[ 2937], 99.50th=[ 3004], 99.90th=[ 3004], 99.95th=[ 3004], 00:25:10.084 | 99.99th=[ 3004] 00:25:10.084 bw ( KiB/s): min=12288, max=96256, per=1.44%, avg=61177.81, stdev=25950.24, samples=16 00:25:10.084 iops : min= 12, max= 94, avg=59.69, stdev=25.37, samples=16 00:25:10.084 lat (msec) : 100=0.33%, 250=1.16%, 500=2.31%, 750=2.64%, 1000=1.16% 00:25:10.084 lat (msec) : 2000=54.71%, >=2000=37.69% 00:25:10.084 cpu : usr=0.04%, sys=1.08%, ctx=756, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.084 issued rwts: total=605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 0: pid=424793: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=48, BW=48.7MiB/s (51.1MB/s)(494MiB/10143msec) 00:25:10.084 slat (usec): min=42, max=339709, avg=20342.95, 
stdev=36016.61 00:25:10.084 clat (msec): min=90, max=3530, avg=2329.75, stdev=850.76 00:25:10.084 lat (msec): min=152, max=3618, avg=2350.09, stdev=852.20 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 192], 5.00th=[ 558], 10.00th=[ 961], 20.00th=[ 1737], 00:25:10.084 | 30.00th=[ 2089], 40.00th=[ 2198], 50.00th=[ 2534], 60.00th=[ 2735], 00:25:10.084 | 70.00th=[ 2836], 80.00th=[ 3171], 90.00th=[ 3272], 95.00th=[ 3373], 00:25:10.084 | 99.00th=[ 3440], 99.50th=[ 3473], 99.90th=[ 3540], 99.95th=[ 3540], 00:25:10.084 | 99.99th=[ 3540] 00:25:10.084 bw ( KiB/s): min=14336, max=81920, per=1.18%, avg=49966.80, stdev=17737.40, samples=15 00:25:10.084 iops : min= 14, max= 80, avg=48.73, stdev=17.39, samples=15 00:25:10.084 lat (msec) : 100=0.20%, 250=1.42%, 500=3.04%, 750=3.64%, 1000=2.83% 00:25:10.084 lat (msec) : 2000=11.54%, >=2000=77.33% 00:25:10.084 cpu : usr=0.04%, sys=1.27%, ctx=891, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.2% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.084 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 0: pid=424794: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=55, BW=55.1MiB/s (57.8MB/s)(555MiB/10075msec) 00:25:10.084 slat (usec): min=36, max=246094, avg=18078.35, stdev=37363.11 00:25:10.084 clat (msec): min=39, max=4439, avg=1713.91, stdev=866.09 00:25:10.084 lat (msec): min=81, max=4452, avg=1731.98, stdev=875.13 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 117], 5.00th=[ 245], 10.00th=[ 514], 20.00th=[ 1083], 00:25:10.084 | 30.00th=[ 1485], 40.00th=[ 1569], 50.00th=[ 1720], 60.00th=[ 1871], 00:25:10.084 | 70.00th=[ 1955], 80.00th=[ 2039], 90.00th=[ 2802], 95.00th=[ 3641], 00:25:10.084 | 99.00th=[ 4279], 99.50th=[ 4396], 99.90th=[ 4463], 99.95th=[ 4463], 00:25:10.084 | 99.99th=[ 4463] 00:25:10.084 bw ( KiB/s): min=16384, max=126976, per=1.71%, avg=72874.67, stdev=29684.23, samples=12 00:25:10.084 iops : min= 16, max= 124, avg=71.17, stdev=28.99, samples=12 00:25:10.084 lat (msec) : 50=0.18%, 100=0.36%, 250=5.05%, 500=4.32%, 750=4.32% 00:25:10.084 lat (msec) : 1000=5.23%, 2000=56.76%, >=2000=23.78% 00:25:10.084 cpu : usr=0.00%, sys=1.29%, ctx=950, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.6% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.084 issued rwts: total=555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 0: pid=424795: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=36, BW=37.0MiB/s (38.8MB/s)(372MiB/10057msec) 00:25:10.084 slat (usec): min=39, max=222858, avg=26950.77, stdev=47837.64 00:25:10.084 clat (msec): min=29, max=4139, avg=2663.98, stdev=1250.39 00:25:10.084 lat (msec): min=74, max=4172, avg=2690.93, stdev=1256.48 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 106], 5.00th=[ 228], 10.00th=[ 609], 20.00th=[ 1045], 00:25:10.084 | 30.00th=[ 2198], 40.00th=[ 2735], 50.00th=[ 3306], 60.00th=[ 3540], 00:25:10.084 | 70.00th=[ 3641], 80.00th=[ 3675], 90.00th=[ 3775], 95.00th=[ 3842], 00:25:10.084 | 
99.00th=[ 3943], 99.50th=[ 4077], 99.90th=[ 4144], 99.95th=[ 4144], 00:25:10.084 | 99.99th=[ 4144] 00:25:10.084 bw ( KiB/s): min=20480, max=83968, per=0.98%, avg=41642.67, stdev=19099.15, samples=12 00:25:10.084 iops : min= 20, max= 82, avg=40.67, stdev=18.65, samples=12 00:25:10.084 lat (msec) : 50=0.27%, 100=0.27%, 250=4.57%, 500=4.30%, 750=1.88% 00:25:10.084 lat (msec) : 1000=4.03%, 2000=13.71%, >=2000=70.97% 00:25:10.084 cpu : usr=0.00%, sys=0.89%, ctx=819, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.1% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:25:10.084 issued rwts: total=372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 0: pid=424796: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=62, BW=62.1MiB/s (65.1MB/s)(627MiB/10097msec) 00:25:10.084 slat (usec): min=28, max=238416, avg=16053.08, stdev=43764.24 00:25:10.084 clat (msec): min=29, max=2503, avg=1797.44, stdev=493.63 00:25:10.084 lat (msec): min=107, max=2504, avg=1813.50, stdev=493.19 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 321], 5.00th=[ 827], 10.00th=[ 1183], 20.00th=[ 1552], 00:25:10.084 | 30.00th=[ 1720], 40.00th=[ 1804], 50.00th=[ 1871], 60.00th=[ 1989], 00:25:10.084 | 70.00th=[ 2106], 80.00th=[ 2198], 90.00th=[ 2299], 95.00th=[ 2400], 00:25:10.084 | 99.00th=[ 2500], 99.50th=[ 2500], 99.90th=[ 2500], 99.95th=[ 2500], 00:25:10.084 | 99.99th=[ 2500] 00:25:10.084 bw ( KiB/s): min=38912, max=96256, per=1.50%, avg=63872.00, stdev=21689.71, samples=16 00:25:10.084 iops : min= 38, max= 94, avg=62.38, stdev=21.18, samples=16 00:25:10.084 lat (msec) : 50=0.16%, 250=0.64%, 500=3.19%, 750=0.96%, 1000=2.55% 00:25:10.084 lat (msec) : 2000=55.02%, >=2000=37.48% 00:25:10.084 cpu : usr=0.00%, sys=1.20%, ctx=689, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=90.0% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.084 issued rwts: total=627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 0: pid=424797: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=68, BW=68.8MiB/s (72.2MB/s)(693MiB/10067msec) 00:25:10.084 slat (usec): min=29, max=204872, avg=14454.39, stdev=40405.78 00:25:10.084 clat (msec): min=47, max=2375, avg=1667.67, stdev=428.54 00:25:10.084 lat (msec): min=83, max=2375, avg=1682.13, stdev=428.30 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 121], 5.00th=[ 701], 10.00th=[ 1217], 20.00th=[ 1502], 00:25:10.084 | 30.00th=[ 1536], 40.00th=[ 1670], 50.00th=[ 1720], 60.00th=[ 1787], 00:25:10.084 | 70.00th=[ 1888], 80.00th=[ 1955], 90.00th=[ 2140], 95.00th=[ 2265], 00:25:10.084 | 99.00th=[ 2366], 99.50th=[ 2366], 99.90th=[ 2366], 99.95th=[ 2366], 00:25:10.084 | 99.99th=[ 2366] 00:25:10.084 bw ( KiB/s): min=24576, max=96256, per=1.70%, avg=72320.00, stdev=20841.61, samples=16 00:25:10.084 iops : min= 24, max= 94, avg=70.63, stdev=20.35, samples=16 00:25:10.084 lat (msec) : 50=0.14%, 100=0.14%, 250=2.16%, 500=0.58%, 750=2.31% 00:25:10.084 lat (msec) : 1000=2.45%, 2000=75.32%, >=2000=16.88% 00:25:10.084 cpu : usr=0.06%, sys=1.24%, 
ctx=709, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.084 issued rwts: total=693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 0: pid=424798: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=57, BW=57.2MiB/s (59.9MB/s)(580MiB/10146msec) 00:25:10.084 slat (usec): min=28, max=356596, avg=17256.72, stdev=40667.95 00:25:10.084 clat (msec): min=134, max=3617, avg=1943.77, stdev=778.83 00:25:10.084 lat (msec): min=188, max=3618, avg=1961.03, stdev=782.24 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 247], 5.00th=[ 460], 10.00th=[ 776], 20.00th=[ 1418], 00:25:10.084 | 30.00th=[ 1670], 40.00th=[ 1770], 50.00th=[ 1854], 60.00th=[ 2089], 00:25:10.084 | 70.00th=[ 2299], 80.00th=[ 2668], 90.00th=[ 3004], 95.00th=[ 3171], 00:25:10.084 | 99.00th=[ 3473], 99.50th=[ 3507], 99.90th=[ 3608], 99.95th=[ 3608], 00:25:10.084 | 99.99th=[ 3608] 00:25:10.084 bw ( KiB/s): min=24576, max=126976, per=1.36%, avg=57971.19, stdev=29880.71, samples=16 00:25:10.084 iops : min= 24, max= 124, avg=56.56, stdev=29.10, samples=16 00:25:10.084 lat (msec) : 250=1.21%, 500=4.83%, 750=3.62%, 1000=4.48%, 2000=42.07% 00:25:10.084 lat (msec) : >=2000=43.79% 00:25:10.084 cpu : usr=0.00%, sys=1.08%, ctx=780, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.084 issued rwts: total=580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 0: pid=424799: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=60, BW=60.2MiB/s (63.1MB/s)(610MiB/10131msec) 00:25:10.084 slat (usec): min=46, max=230524, avg=16491.44, stdev=42515.82 00:25:10.084 clat (msec): min=68, max=2732, avg=1828.94, stdev=611.39 00:25:10.084 lat (msec): min=175, max=2733, avg=1845.43, stdev=613.11 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 186], 5.00th=[ 531], 10.00th=[ 919], 20.00th=[ 1401], 00:25:10.084 | 30.00th=[ 1519], 40.00th=[ 1838], 50.00th=[ 2005], 60.00th=[ 2165], 00:25:10.084 | 70.00th=[ 2198], 80.00th=[ 2333], 90.00th=[ 2500], 95.00th=[ 2635], 00:25:10.084 | 99.00th=[ 2668], 99.50th=[ 2702], 99.90th=[ 2735], 99.95th=[ 2735], 00:25:10.084 | 99.99th=[ 2735] 00:25:10.084 bw ( KiB/s): min=16384, max=94208, per=1.45%, avg=61696.00, stdev=23858.69, samples=16 00:25:10.084 iops : min= 16, max= 92, avg=60.25, stdev=23.30, samples=16 00:25:10.084 lat (msec) : 100=0.16%, 250=2.62%, 500=1.97%, 750=3.61%, 1000=2.30% 00:25:10.084 lat (msec) : 2000=39.18%, >=2000=50.16% 00:25:10.084 cpu : usr=0.00%, sys=1.08%, ctx=712, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:25:10.084 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 
0: pid=424800: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=40, BW=41.0MiB/s (43.0MB/s)(414MiB/10104msec) 00:25:10.084 slat (usec): min=35, max=241298, avg=24178.77, stdev=32236.92 00:25:10.084 clat (msec): min=91, max=4483, avg=2388.62, stdev=910.23 00:25:10.084 lat (msec): min=109, max=4508, avg=2412.80, stdev=914.39 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 197], 5.00th=[ 592], 10.00th=[ 1003], 20.00th=[ 1703], 00:25:10.084 | 30.00th=[ 2299], 40.00th=[ 2400], 50.00th=[ 2467], 60.00th=[ 2567], 00:25:10.084 | 70.00th=[ 2668], 80.00th=[ 2836], 90.00th=[ 3742], 95.00th=[ 4010], 00:25:10.084 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:25:10.084 | 99.99th=[ 4463] 00:25:10.084 bw ( KiB/s): min=38912, max=79872, per=1.25%, avg=53248.00, stdev=12491.12, samples=11 00:25:10.084 iops : min= 38, max= 78, avg=52.00, stdev=12.20, samples=11 00:25:10.084 lat (msec) : 100=0.24%, 250=1.45%, 500=2.42%, 750=3.14%, 1000=2.66% 00:25:10.084 lat (msec) : 2000=15.46%, >=2000=74.64% 00:25:10.084 cpu : usr=0.03%, sys=1.42%, ctx=877, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.8% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:25:10.084 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 job5: (groupid=0, jobs=1): err= 0: pid=424801: Tue Nov 19 01:09:14 2024 00:25:10.084 read: IOPS=80, BW=80.6MiB/s (84.5MB/s)(817MiB/10136msec) 00:25:10.084 slat (usec): min=29, max=320638, avg=12257.49, stdev=36838.02 00:25:10.084 clat (msec): min=118, max=2047, avg=1465.29, stdev=267.31 00:25:10.084 lat (msec): min=173, max=2048, avg=1477.55, stdev=265.10 00:25:10.084 clat percentiles (msec): 00:25:10.084 | 1.00th=[ 288], 5.00th=[ 1116], 10.00th=[ 1200], 20.00th=[ 1334], 00:25:10.084 | 30.00th=[ 1385], 40.00th=[ 1452], 50.00th=[ 1485], 60.00th=[ 1519], 00:25:10.084 | 70.00th=[ 1569], 80.00th=[ 1636], 90.00th=[ 1737], 95.00th=[ 1804], 00:25:10.084 | 99.00th=[ 1955], 99.50th=[ 2039], 99.90th=[ 2056], 99.95th=[ 2056], 00:25:10.084 | 99.99th=[ 2056] 00:25:10.084 bw ( KiB/s): min=36864, max=122880, per=1.95%, avg=83004.24, stdev=25385.26, samples=17 00:25:10.084 iops : min= 36, max= 120, avg=81.06, stdev=24.79, samples=17 00:25:10.084 lat (msec) : 250=0.37%, 500=1.71%, 750=0.86%, 1000=1.71%, 2000=94.49% 00:25:10.084 lat (msec) : >=2000=0.86% 00:25:10.084 cpu : usr=0.03%, sys=1.23%, ctx=771, majf=0, minf=32769 00:25:10.084 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:25:10.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.084 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:10.084 issued rwts: total=817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:10.084 00:25:10.084 Run status group 0 (all jobs): 00:25:10.084 READ: bw=4152MiB/s (4354MB/s), 25.3MiB/s-94.3MiB/s (26.6MB/s-98.8MB/s), io=41.3GiB (44.4GB), run=10055-10194msec 00:25:10.084 00:25:10.084 Disk stats (read/write): 00:25:10.084 nvme0n1: ios=53865/0, merge=0/0, ticks=8800996/0, in_queue=8800996, util=98.48% 00:25:10.084 nvme2n1: ios=53346/0, merge=0/0, ticks=8967279/0, in_queue=8967279, util=98.59% 00:25:10.084 nvme3n1: ios=56991/0, merge=0/0, ticks=9396342/0, in_queue=9396342, util=98.62% 
00:25:10.084 nvme4n1: ios=55972/0, merge=0/0, ticks=9561773/0, in_queue=9561773, util=98.88% 00:25:10.085 nvme5n1: ios=52166/0, merge=0/0, ticks=8725578/0, in_queue=8725578, util=99.01% 00:25:10.085 nvme6n1: ios=60899/0, merge=0/0, ticks=10870992/0, in_queue=10870992, util=98.78% 00:25:10.085 01:09:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:25:10.085 01:09:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:25:10.085 01:09:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:10.085 01:09:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:25:10.085 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:10.085 01:09:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:10.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:10.651 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:25:10.651 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 
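The trace around this point records the per-subsystem teardown that target/srq_overwhelm.sh performs once the fio run completes: for each of the six subsystems it runs nvme disconnect against the cnode NQN, waits until the matching serial no longer shows up in lsblk, and then removes the subsystem from the target with rpc_cmd nvmf_delete_subsystem. The following is only a minimal sketch of that pattern as it appears in the log; the retry budget, the printf serial format, and the rpc.py path are illustrative assumptions, not the exact waitforserial_disconnect/rpc_cmd helpers from autotest_common.sh.

#!/usr/bin/env bash
# Sketch: disconnect each NVMe-oF subsystem, wait for its namespace to
# disappear, then delete the subsystem on the target side.
# SPDK_DIR/scripts/rpc.py is an assumed path; NQNs and serials follow the log.
SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
for i in $(seq 0 5); do
    nqn="nqn.2016-06.io.spdk:cnode${i}"
    serial=$(printf 'SPDK%014d' "$i")          # e.g. SPDK00000000000000
    nvme disconnect -n "$nqn"
    # Poll lsblk (assumed ~15s budget) until the serial is gone from the host.
    for _ in $(seq 1 15); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || break
        sleep 1
    done
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem "$nqn"
done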
00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:10.652 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:11.587 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:11.587 01:09:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:12.154 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:12.154 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:25:12.154 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:12.154 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:12.154 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:25:12.413 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:25:12.413 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:12.413 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- 
# return 0 00:25:12.413 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:12.413 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.413 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:12.413 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.413 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:12.413 01:09:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:13.346 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:25:13.346 01:09:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:14.280 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1235 -- # return 0 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:14.281 rmmod nvme_rdma 00:25:14.281 rmmod nvme_fabrics 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 423736 ']' 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 423736 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 423736 ']' 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 423736 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 423736 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 423736' 00:25:14.281 killing process with pid 423736 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 423736 00:25:14.281 01:09:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 423736 00:25:16.816 01:09:23 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:16.816 00:25:16.816 real 0m28.477s 00:25:16.816 user 1m35.007s 00:25:16.816 sys 0m16.282s 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:16.816 ************************************ 00:25:16.816 END TEST nvmf_srq_overwhelm 00:25:16.816 ************************************ 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:16.816 ************************************ 00:25:16.816 START TEST nvmf_shutdown 00:25:16.816 ************************************ 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:25:16.816 * Looking for test storage... 00:25:16.816 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:25:16.816 
01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:16.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.816 --rc genhtml_branch_coverage=1 00:25:16.816 --rc genhtml_function_coverage=1 00:25:16.816 --rc genhtml_legend=1 00:25:16.816 --rc geninfo_all_blocks=1 00:25:16.816 --rc geninfo_unexecuted_blocks=1 00:25:16.816 00:25:16.816 ' 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:16.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.816 --rc genhtml_branch_coverage=1 00:25:16.816 --rc genhtml_function_coverage=1 00:25:16.816 --rc genhtml_legend=1 00:25:16.816 --rc geninfo_all_blocks=1 00:25:16.816 --rc geninfo_unexecuted_blocks=1 00:25:16.816 00:25:16.816 ' 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:16.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.816 --rc genhtml_branch_coverage=1 00:25:16.816 --rc genhtml_function_coverage=1 00:25:16.816 --rc genhtml_legend=1 00:25:16.816 --rc geninfo_all_blocks=1 00:25:16.816 --rc geninfo_unexecuted_blocks=1 00:25:16.816 00:25:16.816 ' 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:16.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.816 --rc genhtml_branch_coverage=1 00:25:16.816 --rc genhtml_function_coverage=1 00:25:16.816 --rc genhtml_legend=1 00:25:16.816 --rc geninfo_all_blocks=1 00:25:16.816 --rc geninfo_unexecuted_blocks=1 00:25:16.816 00:25:16.816 ' 00:25:16.816 01:09:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.816 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.817 01:09:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:16.817 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:16.817 ************************************ 00:25:16.817 START TEST nvmf_shutdown_tc1 00:25:16.817 ************************************ 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:16.817 01:09:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@317 -- # local -A pci_drivers 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:25:23.385 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:23.386 01:09:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:23.386 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:23.386 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@405 -- # modinfo irdma 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@405 -- # modprobe 
irdma roce_ena=1 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:23.386 Found net devices under 0000:af:00.0: cvl_0_0 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:23.386 Found net devices under 0000:af:00.1: cvl_0_1 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:23.386 01:09:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:25:23.386 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:23.387 01:09:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:25:23.387 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:25:23.387 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:25:23.387 altname enp175s0f0np0 00:25:23.387 altname ens801f0np0 00:25:23.387 inet 192.168.100.8/24 scope global cvl_0_0 00:25:23.387 valid_lft forever preferred_lft forever 00:25:23.387 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:25:23.387 valid_lft forever preferred_lft forever 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:25:23.387 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:25:23.387 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:25:23.387 altname enp175s0f1np1 00:25:23.387 altname ens801f1np1 00:25:23.387 inet 192.168.100.9/24 scope global cvl_0_1 00:25:23.387 valid_lft forever preferred_lft forever 00:25:23.387 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:25:23.387 valid_lft forever preferred_lft forever 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- 
# [[ rdma == \r\d\m\a ]] 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:23.387 192.168.100.9' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:23.387 192.168.100.9' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:23.387 192.168.100.9' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:23.387 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:23.388 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=430690 00:25:23.388 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 430690 00:25:23.388 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:23.388 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@835 -- # '[' -z 430690 ']' 00:25:23.388 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.388 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:23.388 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.388 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:23.388 01:09:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:23.388 [2024-11-19 01:09:29.361335] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:23.388 [2024-11-19 01:09:29.361420] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.388 [2024-11-19 01:09:29.472031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:23.388 [2024-11-19 01:09:29.583097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.388 [2024-11-19 01:09:29.583141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.388 [2024-11-19 01:09:29.583153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.388 [2024-11-19 01:09:29.583164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.388 [2024-11-19 01:09:29.583172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
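The trace above resolves each RDMA-capable interface (cvl_0_0, cvl_0_1) to its IPv4 address before the target is started. A minimal standalone sketch of that helper, reconstructed from the ip/awk/cut pipeline visible in the trace (error handling omitted; a single address per interface is assumed):

get_ip_address() {
    local interface=$1
    # Field 4 of `ip -o -4 addr show` is the address with its prefix length, e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address cvl_0_0   # prints 192.168.100.8 on this rig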
00:25:23.388 [2024-11-19 01:09:29.585904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.388 [2024-11-19 01:09:29.585931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.388 [2024-11-19 01:09:29.586034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.388 [2024-11-19 01:09:29.586056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:23.646 [2024-11-19 01:09:30.231337] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000029440/0x617000007c40) succeed. 00:25:23.646 [2024-11-19 01:09:30.240909] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:25:23.646 [2024-11-19 01:09:30.240937] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.646 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:23.647 01:09:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.647 01:09:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:23.905 Malloc1 00:25:23.905 [2024-11-19 01:09:30.412613] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:23.905 Malloc2 00:25:23.905 Malloc3 00:25:24.210 Malloc4 00:25:24.210 Malloc5 00:25:24.210 Malloc6 00:25:24.468 Malloc7 00:25:24.468 Malloc8 00:25:24.727 Malloc9 00:25:24.727 Malloc10 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=431154 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 431154 /var/tmp/bdevperf.sock 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 431154 ']' 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:24.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
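Shutdown tc1 first brings up a throwaway bdev_svc application against the generated target JSON and blocks until its RPC socket answers. A condensed sketch of that launch, assuming $rootdir points at the SPDK tree and that the PID is captured with $! as is conventional (both assumptions; the flags, socket path and waitforlisten call are taken from the trace):

"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock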
00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.727 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.728 { 00:25:24.728 "params": { 00:25:24.728 "name": "Nvme$subsystem", 00:25:24.728 "trtype": "$TEST_TRANSPORT", 00:25:24.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.728 "adrfam": "ipv4", 00:25:24.728 "trsvcid": "$NVMF_PORT", 00:25:24.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.728 "hdgst": ${hdgst:-false}, 00:25:24.728 "ddgst": ${ddgst:-false} 00:25:24.728 }, 00:25:24.728 "method": "bdev_nvme_attach_controller" 00:25:24.728 } 00:25:24.728 EOF 00:25:24.728 )") 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.728 { 00:25:24.728 "params": { 00:25:24.728 "name": "Nvme$subsystem", 00:25:24.728 "trtype": "$TEST_TRANSPORT", 00:25:24.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.728 "adrfam": "ipv4", 00:25:24.728 "trsvcid": "$NVMF_PORT", 00:25:24.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.728 "hdgst": ${hdgst:-false}, 00:25:24.728 "ddgst": ${ddgst:-false} 00:25:24.728 }, 00:25:24.728 "method": "bdev_nvme_attach_controller" 00:25:24.728 } 00:25:24.728 EOF 00:25:24.728 )") 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.728 { 00:25:24.728 "params": { 00:25:24.728 "name": "Nvme$subsystem", 00:25:24.728 "trtype": "$TEST_TRANSPORT", 00:25:24.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.728 "adrfam": "ipv4", 00:25:24.728 "trsvcid": "$NVMF_PORT", 00:25:24.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.728 "hdgst": ${hdgst:-false}, 00:25:24.728 "ddgst": ${ddgst:-false} 00:25:24.728 }, 00:25:24.728 "method": "bdev_nvme_attach_controller" 00:25:24.728 } 00:25:24.728 EOF 00:25:24.728 )") 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.728 { 00:25:24.728 "params": { 00:25:24.728 "name": "Nvme$subsystem", 
00:25:24.728 "trtype": "$TEST_TRANSPORT", 00:25:24.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.728 "adrfam": "ipv4", 00:25:24.728 "trsvcid": "$NVMF_PORT", 00:25:24.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.728 "hdgst": ${hdgst:-false}, 00:25:24.728 "ddgst": ${ddgst:-false} 00:25:24.728 }, 00:25:24.728 "method": "bdev_nvme_attach_controller" 00:25:24.728 } 00:25:24.728 EOF 00:25:24.728 )") 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.728 { 00:25:24.728 "params": { 00:25:24.728 "name": "Nvme$subsystem", 00:25:24.728 "trtype": "$TEST_TRANSPORT", 00:25:24.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.728 "adrfam": "ipv4", 00:25:24.728 "trsvcid": "$NVMF_PORT", 00:25:24.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.728 "hdgst": ${hdgst:-false}, 00:25:24.728 "ddgst": ${ddgst:-false} 00:25:24.728 }, 00:25:24.728 "method": "bdev_nvme_attach_controller" 00:25:24.728 } 00:25:24.728 EOF 00:25:24.728 )") 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.728 { 00:25:24.728 "params": { 00:25:24.728 "name": "Nvme$subsystem", 00:25:24.728 "trtype": "$TEST_TRANSPORT", 00:25:24.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.728 "adrfam": "ipv4", 00:25:24.728 "trsvcid": "$NVMF_PORT", 00:25:24.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.728 "hdgst": ${hdgst:-false}, 00:25:24.728 "ddgst": ${ddgst:-false} 00:25:24.728 }, 00:25:24.728 "method": "bdev_nvme_attach_controller" 00:25:24.728 } 00:25:24.728 EOF 00:25:24.728 )") 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.728 { 00:25:24.728 "params": { 00:25:24.728 "name": "Nvme$subsystem", 00:25:24.728 "trtype": "$TEST_TRANSPORT", 00:25:24.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.728 "adrfam": "ipv4", 00:25:24.728 "trsvcid": "$NVMF_PORT", 00:25:24.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.728 "hdgst": ${hdgst:-false}, 00:25:24.728 "ddgst": ${ddgst:-false} 00:25:24.728 }, 00:25:24.728 "method": "bdev_nvme_attach_controller" 00:25:24.728 } 00:25:24.728 EOF 00:25:24.728 )") 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.728 
01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.728 { 00:25:24.728 "params": { 00:25:24.728 "name": "Nvme$subsystem", 00:25:24.728 "trtype": "$TEST_TRANSPORT", 00:25:24.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.728 "adrfam": "ipv4", 00:25:24.728 "trsvcid": "$NVMF_PORT", 00:25:24.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.728 "hdgst": ${hdgst:-false}, 00:25:24.728 "ddgst": ${ddgst:-false} 00:25:24.728 }, 00:25:24.728 "method": "bdev_nvme_attach_controller" 00:25:24.728 } 00:25:24.728 EOF 00:25:24.728 )") 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.728 { 00:25:24.728 "params": { 00:25:24.728 "name": "Nvme$subsystem", 00:25:24.728 "trtype": "$TEST_TRANSPORT", 00:25:24.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.728 "adrfam": "ipv4", 00:25:24.728 "trsvcid": "$NVMF_PORT", 00:25:24.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.728 "hdgst": ${hdgst:-false}, 00:25:24.728 "ddgst": ${ddgst:-false} 00:25:24.728 }, 00:25:24.728 "method": "bdev_nvme_attach_controller" 00:25:24.728 } 00:25:24.728 EOF 00:25:24.728 )") 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.728 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.728 { 00:25:24.728 "params": { 00:25:24.728 "name": "Nvme$subsystem", 00:25:24.728 "trtype": "$TEST_TRANSPORT", 00:25:24.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.728 "adrfam": "ipv4", 00:25:24.728 "trsvcid": "$NVMF_PORT", 00:25:24.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.728 "hdgst": ${hdgst:-false}, 00:25:24.728 "ddgst": ${ddgst:-false} 00:25:24.728 }, 00:25:24.728 "method": "bdev_nvme_attach_controller" 00:25:24.729 } 00:25:24.729 EOF 00:25:24.729 )") 00:25:24.729 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:24.729 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:25:24.729 [2024-11-19 01:09:31.416637] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
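Each of the ten config+= fragments assembled above comes from the same here-document template, and the fragments are then joined with a comma IFS before being printed as the single JSON document shown just below. A simplified sketch of that generator under a hypothetical name (the traced gen_nvmf_target_json additionally wraps and validates the result with jq, which is omitted here):

gen_attach_controller_fragments() {   # hypothetical helper name for this sketch
    local IFS subsystem
    local -a config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    IFS=,
    printf '%s\n' "${config[*]}"   # IFS=, makes "${config[*]}" comma-separate the fragments
}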
00:25:24.729 [2024-11-19 01:09:31.416725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:24.729 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:24.729 01:09:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:24.729 "params": { 00:25:24.729 "name": "Nvme1", 00:25:24.729 "trtype": "rdma", 00:25:24.729 "traddr": "192.168.100.8", 00:25:24.729 "adrfam": "ipv4", 00:25:24.729 "trsvcid": "4420", 00:25:24.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.729 "hdgst": false, 00:25:24.729 "ddgst": false 00:25:24.729 }, 00:25:24.729 "method": "bdev_nvme_attach_controller" 00:25:24.729 },{ 00:25:24.729 "params": { 00:25:24.729 "name": "Nvme2", 00:25:24.729 "trtype": "rdma", 00:25:24.729 "traddr": "192.168.100.8", 00:25:24.729 "adrfam": "ipv4", 00:25:24.729 "trsvcid": "4420", 00:25:24.729 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:24.729 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:24.729 "hdgst": false, 00:25:24.729 "ddgst": false 00:25:24.729 }, 00:25:24.729 "method": "bdev_nvme_attach_controller" 00:25:24.729 },{ 00:25:24.729 "params": { 00:25:24.729 "name": "Nvme3", 00:25:24.729 "trtype": "rdma", 00:25:24.729 "traddr": "192.168.100.8", 00:25:24.729 "adrfam": "ipv4", 00:25:24.729 "trsvcid": "4420", 00:25:24.729 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:24.729 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:24.729 "hdgst": false, 00:25:24.729 "ddgst": false 00:25:24.729 }, 00:25:24.729 "method": "bdev_nvme_attach_controller" 00:25:24.729 },{ 00:25:24.729 "params": { 00:25:24.729 "name": "Nvme4", 00:25:24.729 "trtype": "rdma", 00:25:24.729 "traddr": "192.168.100.8", 00:25:24.729 "adrfam": "ipv4", 00:25:24.729 "trsvcid": "4420", 00:25:24.729 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:24.729 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:24.729 "hdgst": false, 00:25:24.729 "ddgst": false 00:25:24.729 }, 00:25:24.729 "method": "bdev_nvme_attach_controller" 00:25:24.729 },{ 00:25:24.729 "params": { 00:25:24.729 "name": "Nvme5", 00:25:24.729 "trtype": "rdma", 00:25:24.729 "traddr": "192.168.100.8", 00:25:24.729 "adrfam": "ipv4", 00:25:24.729 "trsvcid": "4420", 00:25:24.729 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:24.729 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:24.729 "hdgst": false, 00:25:24.729 "ddgst": false 00:25:24.729 }, 00:25:24.729 "method": "bdev_nvme_attach_controller" 00:25:24.729 },{ 00:25:24.729 "params": { 00:25:24.729 "name": "Nvme6", 00:25:24.729 "trtype": "rdma", 00:25:24.729 "traddr": "192.168.100.8", 00:25:24.729 "adrfam": "ipv4", 00:25:24.729 "trsvcid": "4420", 00:25:24.729 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:24.729 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:24.729 "hdgst": false, 00:25:24.729 "ddgst": false 00:25:24.729 }, 00:25:24.729 "method": "bdev_nvme_attach_controller" 00:25:24.729 },{ 00:25:24.729 "params": { 00:25:24.729 "name": "Nvme7", 00:25:24.729 "trtype": "rdma", 00:25:24.729 "traddr": "192.168.100.8", 00:25:24.729 "adrfam": "ipv4", 00:25:24.729 "trsvcid": "4420", 00:25:24.729 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:24.729 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:24.729 "hdgst": false, 00:25:24.729 "ddgst": false 00:25:24.729 }, 00:25:24.729 "method": 
"bdev_nvme_attach_controller" 00:25:24.729 },{ 00:25:24.729 "params": { 00:25:24.729 "name": "Nvme8", 00:25:24.729 "trtype": "rdma", 00:25:24.729 "traddr": "192.168.100.8", 00:25:24.729 "adrfam": "ipv4", 00:25:24.729 "trsvcid": "4420", 00:25:24.729 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:24.729 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:24.729 "hdgst": false, 00:25:24.729 "ddgst": false 00:25:24.729 }, 00:25:24.729 "method": "bdev_nvme_attach_controller" 00:25:24.729 },{ 00:25:24.729 "params": { 00:25:24.729 "name": "Nvme9", 00:25:24.729 "trtype": "rdma", 00:25:24.729 "traddr": "192.168.100.8", 00:25:24.729 "adrfam": "ipv4", 00:25:24.729 "trsvcid": "4420", 00:25:24.729 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:24.729 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:24.729 "hdgst": false, 00:25:24.729 "ddgst": false 00:25:24.729 }, 00:25:24.729 "method": "bdev_nvme_attach_controller" 00:25:24.729 },{ 00:25:24.729 "params": { 00:25:24.729 "name": "Nvme10", 00:25:24.729 "trtype": "rdma", 00:25:24.729 "traddr": "192.168.100.8", 00:25:24.729 "adrfam": "ipv4", 00:25:24.729 "trsvcid": "4420", 00:25:24.729 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:24.729 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:24.729 "hdgst": false, 00:25:24.729 "ddgst": false 00:25:24.729 }, 00:25:24.729 "method": "bdev_nvme_attach_controller" 00:25:24.729 }' 00:25:24.987 [2024-11-19 01:09:31.543682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.987 [2024-11-19 01:09:31.658998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.362 01:09:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.362 01:09:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:26.362 01:09:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:26.362 01:09:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.362 01:09:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:26.362 01:09:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.362 01:09:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 431154 00:25:26.362 01:09:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:26.362 01:09:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:27.297 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 431154 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 430690 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:27.297 
01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.297 { 00:25:27.297 "params": { 00:25:27.297 "name": "Nvme$subsystem", 00:25:27.297 "trtype": "$TEST_TRANSPORT", 00:25:27.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.297 "adrfam": "ipv4", 00:25:27.297 "trsvcid": "$NVMF_PORT", 00:25:27.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.297 "hdgst": ${hdgst:-false}, 00:25:27.297 "ddgst": ${ddgst:-false} 00:25:27.297 }, 00:25:27.297 "method": "bdev_nvme_attach_controller" 00:25:27.297 } 00:25:27.297 EOF 00:25:27.297 )") 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.297 { 00:25:27.297 "params": { 00:25:27.297 "name": "Nvme$subsystem", 00:25:27.297 "trtype": "$TEST_TRANSPORT", 00:25:27.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.297 "adrfam": "ipv4", 00:25:27.297 "trsvcid": "$NVMF_PORT", 00:25:27.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.297 "hdgst": ${hdgst:-false}, 00:25:27.297 "ddgst": ${ddgst:-false} 00:25:27.297 }, 00:25:27.297 "method": "bdev_nvme_attach_controller" 00:25:27.297 } 00:25:27.297 EOF 00:25:27.297 )") 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.297 { 00:25:27.297 "params": { 00:25:27.297 "name": "Nvme$subsystem", 00:25:27.297 "trtype": "$TEST_TRANSPORT", 00:25:27.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.297 "adrfam": "ipv4", 00:25:27.297 "trsvcid": "$NVMF_PORT", 00:25:27.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.297 "hdgst": ${hdgst:-false}, 00:25:27.297 "ddgst": ${ddgst:-false} 00:25:27.297 }, 00:25:27.297 "method": "bdev_nvme_attach_controller" 00:25:27.297 } 00:25:27.297 EOF 00:25:27.297 )") 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.297 { 00:25:27.297 "params": { 00:25:27.297 "name": "Nvme$subsystem", 00:25:27.297 "trtype": "$TEST_TRANSPORT", 00:25:27.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.297 "adrfam": "ipv4", 00:25:27.297 "trsvcid": 
"$NVMF_PORT", 00:25:27.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.297 "hdgst": ${hdgst:-false}, 00:25:27.297 "ddgst": ${ddgst:-false} 00:25:27.297 }, 00:25:27.297 "method": "bdev_nvme_attach_controller" 00:25:27.297 } 00:25:27.297 EOF 00:25:27.297 )") 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.297 { 00:25:27.297 "params": { 00:25:27.297 "name": "Nvme$subsystem", 00:25:27.297 "trtype": "$TEST_TRANSPORT", 00:25:27.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.297 "adrfam": "ipv4", 00:25:27.297 "trsvcid": "$NVMF_PORT", 00:25:27.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.297 "hdgst": ${hdgst:-false}, 00:25:27.297 "ddgst": ${ddgst:-false} 00:25:27.297 }, 00:25:27.297 "method": "bdev_nvme_attach_controller" 00:25:27.297 } 00:25:27.297 EOF 00:25:27.297 )") 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.297 { 00:25:27.297 "params": { 00:25:27.297 "name": "Nvme$subsystem", 00:25:27.297 "trtype": "$TEST_TRANSPORT", 00:25:27.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.297 "adrfam": "ipv4", 00:25:27.297 "trsvcid": "$NVMF_PORT", 00:25:27.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.297 "hdgst": ${hdgst:-false}, 00:25:27.297 "ddgst": ${ddgst:-false} 00:25:27.297 }, 00:25:27.297 "method": "bdev_nvme_attach_controller" 00:25:27.297 } 00:25:27.297 EOF 00:25:27.297 )") 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.297 { 00:25:27.297 "params": { 00:25:27.297 "name": "Nvme$subsystem", 00:25:27.297 "trtype": "$TEST_TRANSPORT", 00:25:27.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.297 "adrfam": "ipv4", 00:25:27.297 "trsvcid": "$NVMF_PORT", 00:25:27.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.297 "hdgst": ${hdgst:-false}, 00:25:27.297 "ddgst": ${ddgst:-false} 00:25:27.297 }, 00:25:27.297 "method": "bdev_nvme_attach_controller" 00:25:27.297 } 00:25:27.297 EOF 00:25:27.297 )") 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.297 { 
00:25:27.297 "params": { 00:25:27.297 "name": "Nvme$subsystem", 00:25:27.297 "trtype": "$TEST_TRANSPORT", 00:25:27.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.297 "adrfam": "ipv4", 00:25:27.297 "trsvcid": "$NVMF_PORT", 00:25:27.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.297 "hdgst": ${hdgst:-false}, 00:25:27.297 "ddgst": ${ddgst:-false} 00:25:27.297 }, 00:25:27.297 "method": "bdev_nvme_attach_controller" 00:25:27.297 } 00:25:27.297 EOF 00:25:27.297 )") 00:25:27.297 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:27.298 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.298 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.298 { 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme$subsystem", 00:25:27.298 "trtype": "$TEST_TRANSPORT", 00:25:27.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "$NVMF_PORT", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.298 "hdgst": ${hdgst:-false}, 00:25:27.298 "ddgst": ${ddgst:-false} 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 } 00:25:27.298 EOF 00:25:27.298 )") 00:25:27.298 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:27.298 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.298 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.298 { 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme$subsystem", 00:25:27.298 "trtype": "$TEST_TRANSPORT", 00:25:27.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "$NVMF_PORT", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.298 "hdgst": ${hdgst:-false}, 00:25:27.298 "ddgst": ${ddgst:-false} 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 } 00:25:27.298 EOF 00:25:27.298 )") 00:25:27.298 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:27.298 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:25:27.298 [2024-11-19 01:09:33.837740] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:25:27.298 [2024-11-19 01:09:33.837821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431449 ] 00:25:27.298 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:27.298 01:09:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme1", 00:25:27.298 "trtype": "rdma", 00:25:27.298 "traddr": "192.168.100.8", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "4420", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:27.298 "hdgst": false, 00:25:27.298 "ddgst": false 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 },{ 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme2", 00:25:27.298 "trtype": "rdma", 00:25:27.298 "traddr": "192.168.100.8", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "4420", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:27.298 "hdgst": false, 00:25:27.298 "ddgst": false 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 },{ 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme3", 00:25:27.298 "trtype": "rdma", 00:25:27.298 "traddr": "192.168.100.8", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "4420", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:27.298 "hdgst": false, 00:25:27.298 "ddgst": false 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 },{ 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme4", 00:25:27.298 "trtype": "rdma", 00:25:27.298 "traddr": "192.168.100.8", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "4420", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:27.298 "hdgst": false, 00:25:27.298 "ddgst": false 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 },{ 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme5", 00:25:27.298 "trtype": "rdma", 00:25:27.298 "traddr": "192.168.100.8", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "4420", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:27.298 "hdgst": false, 00:25:27.298 "ddgst": false 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 },{ 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme6", 00:25:27.298 "trtype": "rdma", 00:25:27.298 "traddr": "192.168.100.8", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "4420", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:27.298 "hdgst": false, 00:25:27.298 "ddgst": false 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 },{ 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme7", 00:25:27.298 "trtype": "rdma", 00:25:27.298 "traddr": "192.168.100.8", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "4420", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:27.298 "hdgst": false, 00:25:27.298 "ddgst": false 00:25:27.298 }, 00:25:27.298 
"method": "bdev_nvme_attach_controller" 00:25:27.298 },{ 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme8", 00:25:27.298 "trtype": "rdma", 00:25:27.298 "traddr": "192.168.100.8", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "4420", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:27.298 "hdgst": false, 00:25:27.298 "ddgst": false 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 },{ 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme9", 00:25:27.298 "trtype": "rdma", 00:25:27.298 "traddr": "192.168.100.8", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "4420", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:27.298 "hdgst": false, 00:25:27.298 "ddgst": false 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 },{ 00:25:27.298 "params": { 00:25:27.298 "name": "Nvme10", 00:25:27.298 "trtype": "rdma", 00:25:27.298 "traddr": "192.168.100.8", 00:25:27.298 "adrfam": "ipv4", 00:25:27.298 "trsvcid": "4420", 00:25:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:27.298 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:27.298 "hdgst": false, 00:25:27.298 "ddgst": false 00:25:27.298 }, 00:25:27.298 "method": "bdev_nvme_attach_controller" 00:25:27.298 }' 00:25:27.298 [2024-11-19 01:09:33.966134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.557 [2024-11-19 01:09:34.083352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.931 Running I/O for 1 seconds... 00:25:29.865 3136.00 IOPS, 196.00 MiB/s 00:25:29.865 Latency(us) 00:25:29.865 [2024-11-19T00:09:36.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.865 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:29.865 Verification LBA range: start 0x0 length 0x400 00:25:29.865 Nvme1n1 : 1.19 322.55 20.16 0.00 0.00 195023.89 54925.41 183750.46 00:25:29.865 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:29.865 Verification LBA range: start 0x0 length 0x400 00:25:29.865 Nvme2n1 : 1.19 321.93 20.12 0.00 0.00 192651.13 66909.14 169769.45 00:25:29.865 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:29.865 Verification LBA range: start 0x0 length 0x400 00:25:29.865 Nvme3n1 : 1.20 321.31 20.08 0.00 0.00 190274.15 64412.53 149796.57 00:25:29.865 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:29.865 Verification LBA range: start 0x0 length 0x400 00:25:29.865 Nvme4n1 : 1.21 370.72 23.17 0.00 0.00 162577.10 6803.26 134816.91 00:25:29.865 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:29.865 Verification LBA range: start 0x0 length 0x400 00:25:29.865 Nvme5n1 : 1.21 347.91 21.74 0.00 0.00 169964.92 11734.06 139810.13 00:25:29.865 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:29.865 Verification LBA range: start 0x0 length 0x400 00:25:29.865 Nvme6n1 : 1.21 319.61 19.98 0.00 0.00 181337.31 11297.16 134816.91 00:25:29.865 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:29.865 Verification LBA range: start 0x0 length 0x400 00:25:29.865 Nvme7n1 : 1.21 339.06 21.19 0.00 0.00 168871.26 11047.50 117839.97 00:25:29.865 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:29.865 Verification LBA range: start 0x0 length 0x400 00:25:29.865 
Nvme8n1 : 1.21 350.93 21.93 0.00 0.00 160907.80 11421.99 117340.65 00:25:29.865 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:29.865 Verification LBA range: start 0x0 length 0x400 00:25:29.865 Nvme9n1 : 1.20 319.03 19.94 0.00 0.00 174706.43 16352.79 116342.00 00:25:29.865 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:29.865 Verification LBA range: start 0x0 length 0x400 00:25:29.865 Nvme10n1 : 1.21 265.35 16.58 0.00 0.00 206503.16 16477.62 257650.10 00:25:29.865 [2024-11-19T00:09:36.558Z] =================================================================================================================== 00:25:29.865 [2024-11-19T00:09:36.558Z] Total : 3278.40 204.90 0.00 0.00 179189.89 6803.26 257650.10 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:31.240 rmmod nvme_rdma 00:25:31.240 rmmod nvme_fabrics 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 430690 ']' 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 430690 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 430690 ']' 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 430690 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:25:31.240 01:09:37 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 430690 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 430690' 00:25:31.240 killing process with pid 430690 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 430690 00:25:31.240 01:09:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 430690 00:25:34.524 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:34.524 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:34.524 00:25:34.524 real 0m17.382s 00:25:34.524 user 0m50.040s 00:25:34.524 sys 0m5.960s 00:25:34.524 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:34.524 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:34.524 ************************************ 00:25:34.524 END TEST nvmf_shutdown_tc1 00:25:34.524 ************************************ 00:25:34.524 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:34.524 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:34.524 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:34.524 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:34.524 ************************************ 00:25:34.524 START TEST nvmf_shutdown_tc2 00:25:34.524 ************************************ 00:25:34.524 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.525 01:09:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:34.525 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:34.525 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@405 -- # modinfo irdma 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:34.525 Found net devices under 0000:af:00.0: cvl_0_0 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.525 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.525 01:09:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:34.526 Found net devices under 0000:af:00.1: cvl_0_1 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- 
# for net_dev in "${net_devs[@]}" 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:25:34.526 01:09:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:25:34.526 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:25:34.526 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:25:34.526 altname enp175s0f0np0 00:25:34.526 altname ens801f0np0 00:25:34.526 inet 192.168.100.8/24 scope global cvl_0_0 00:25:34.526 valid_lft forever preferred_lft forever 00:25:34.526 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:25:34.526 valid_lft forever preferred_lft forever 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=cvl_0_1 
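The address lookup traced above for cvl_0_0, and repeated next for cvl_0_1, reduces to one pipeline over the iproute2 output. A minimal sketch of that step, assuming the cvl_0_0/cvl_0_1 interface names observed on this node:

# get_ip_address as traced at nvmf/common.sh@116-117: take the CIDR field
# from `ip -o -4 addr show` and strip the prefix length.
for interface in cvl_0_0 cvl_0_1; do
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
done
# On this node the loop prints 192.168.100.8 and 192.168.100.9, which the
# harness later records as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.
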
00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:25:34.526 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:25:34.526 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:25:34.526 altname enp175s0f1np1 00:25:34.526 altname ens801f1np1 00:25:34.526 inet 192.168.100.9/24 scope global cvl_0_1 00:25:34.526 valid_lft forever preferred_lft forever 00:25:34.526 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:25:34.526 valid_lft forever preferred_lft forever 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:34.526 01:09:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:34.526 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:34.527 192.168.100.9' 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:34.527 192.168.100.9' 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:34.527 192.168.100.9' 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:34.527 01:09:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=432758 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 432758 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 432758 ']' 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.527 01:09:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:34.786 [2024-11-19 01:09:41.222238] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:34.786 [2024-11-19 01:09:41.222335] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.786 [2024-11-19 01:09:41.346961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.786 [2024-11-19 01:09:41.454920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.786 [2024-11-19 01:09:41.454964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:34.786 [2024-11-19 01:09:41.454974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.786 [2024-11-19 01:09:41.454999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.786 [2024-11-19 01:09:41.455008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.786 [2024-11-19 01:09:41.457419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.786 [2024-11-19 01:09:41.457460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.786 [2024-11-19 01:09:41.457554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:34.786 [2024-11-19 01:09:41.457533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.353 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.353 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:35.353 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:35.353 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:35.353 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:35.611 [2024-11-19 01:09:42.090961] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000029440/0x617000007c40) succeed. 00:25:35.611 [2024-11-19 01:09:42.100524] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:25:35.611 [2024-11-19 01:09:42.100550] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:35.611 01:09:42 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.611 01:09:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:35.611 Malloc1 00:25:35.611 [2024-11-19 01:09:42.274087] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:35.869 Malloc2 00:25:35.869 Malloc3 00:25:36.128 Malloc4 00:25:36.128 Malloc5 00:25:36.128 Malloc6 00:25:36.386 Malloc7 00:25:36.386 Malloc8 00:25:36.386 Malloc9 00:25:36.644 Malloc10 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=433183 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 433183 /var/tmp/bdevperf.sock 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 433183 ']' 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:36.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
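With the ten subsystems and Malloc bdevs in place, the tc2 case launches bdevperf against the target; the invocation traced at target/shutdown.sh@103 feeds the generated controller list in through process substitution, which bdevperf sees as /dev/fd/63. A sketch reconstructed from that trace, assuming the workspace path of this run:

# bdevperf launch as traced above: queue depth 64, 64 KiB I/Os, verify
# workload, 10 second run, RPC socket at /var/tmp/bdevperf.sock.
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10

The gen_nvmf_target_json helper, whose trace follows, emits one bdev_nvme_attach_controller entry per subsystem, which is why Nvme1n1 through Nvme10n1 appear as bdevperf job targets later in the log.
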
00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:36.644 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:36.644 { 00:25:36.644 "params": { 00:25:36.644 "name": "Nvme$subsystem", 00:25:36.644 "trtype": "$TEST_TRANSPORT", 00:25:36.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.644 "adrfam": "ipv4", 00:25:36.644 "trsvcid": "$NVMF_PORT", 00:25:36.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.644 "hdgst": ${hdgst:-false}, 00:25:36.644 "ddgst": ${ddgst:-false} 00:25:36.644 }, 00:25:36.644 "method": "bdev_nvme_attach_controller" 00:25:36.644 } 00:25:36.644 EOF 00:25:36.644 )") 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:36.645 { 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme$subsystem", 00:25:36.645 "trtype": "$TEST_TRANSPORT", 00:25:36.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "$NVMF_PORT", 00:25:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.645 "hdgst": ${hdgst:-false}, 00:25:36.645 "ddgst": ${ddgst:-false} 00:25:36.645 }, 00:25:36.645 "method": "bdev_nvme_attach_controller" 00:25:36.645 } 00:25:36.645 EOF 00:25:36.645 )") 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:36.645 { 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme$subsystem", 00:25:36.645 "trtype": "$TEST_TRANSPORT", 00:25:36.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "$NVMF_PORT", 00:25:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.645 "hdgst": ${hdgst:-false}, 00:25:36.645 "ddgst": ${ddgst:-false} 00:25:36.645 }, 00:25:36.645 "method": "bdev_nvme_attach_controller" 00:25:36.645 } 00:25:36.645 EOF 00:25:36.645 )") 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:36.645 { 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme$subsystem", 00:25:36.645 "trtype": "$TEST_TRANSPORT", 00:25:36.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "$NVMF_PORT", 00:25:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.645 "hdgst": ${hdgst:-false}, 00:25:36.645 "ddgst": ${ddgst:-false} 00:25:36.645 }, 00:25:36.645 "method": "bdev_nvme_attach_controller" 00:25:36.645 } 00:25:36.645 EOF 00:25:36.645 )") 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:36.645 { 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme$subsystem", 00:25:36.645 "trtype": "$TEST_TRANSPORT", 00:25:36.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "$NVMF_PORT", 00:25:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.645 "hdgst": ${hdgst:-false}, 00:25:36.645 "ddgst": ${ddgst:-false} 00:25:36.645 }, 00:25:36.645 "method": "bdev_nvme_attach_controller" 00:25:36.645 } 00:25:36.645 EOF 00:25:36.645 )") 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:36.645 { 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme$subsystem", 00:25:36.645 "trtype": "$TEST_TRANSPORT", 00:25:36.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "$NVMF_PORT", 00:25:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.645 "hdgst": ${hdgst:-false}, 00:25:36.645 "ddgst": ${ddgst:-false} 00:25:36.645 }, 00:25:36.645 "method": "bdev_nvme_attach_controller" 00:25:36.645 } 00:25:36.645 EOF 00:25:36.645 )") 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:36.645 { 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme$subsystem", 00:25:36.645 "trtype": "$TEST_TRANSPORT", 00:25:36.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "$NVMF_PORT", 00:25:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.645 "hdgst": ${hdgst:-false}, 00:25:36.645 "ddgst": ${ddgst:-false} 00:25:36.645 }, 00:25:36.645 "method": "bdev_nvme_attach_controller" 00:25:36.645 } 00:25:36.645 EOF 00:25:36.645 )") 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:36.645 01:09:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:36.645 { 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme$subsystem", 00:25:36.645 "trtype": "$TEST_TRANSPORT", 00:25:36.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "$NVMF_PORT", 00:25:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.645 "hdgst": ${hdgst:-false}, 00:25:36.645 "ddgst": ${ddgst:-false} 00:25:36.645 }, 00:25:36.645 "method": "bdev_nvme_attach_controller" 00:25:36.645 } 00:25:36.645 EOF 00:25:36.645 )") 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:36.645 { 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme$subsystem", 00:25:36.645 "trtype": "$TEST_TRANSPORT", 00:25:36.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "$NVMF_PORT", 00:25:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.645 "hdgst": ${hdgst:-false}, 00:25:36.645 "ddgst": ${ddgst:-false} 00:25:36.645 }, 00:25:36.645 "method": "bdev_nvme_attach_controller" 00:25:36.645 } 00:25:36.645 EOF 00:25:36.645 )") 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:36.645 { 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme$subsystem", 00:25:36.645 "trtype": "$TEST_TRANSPORT", 00:25:36.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "$NVMF_PORT", 00:25:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.645 "hdgst": ${hdgst:-false}, 00:25:36.645 "ddgst": ${ddgst:-false} 00:25:36.645 }, 00:25:36.645 "method": "bdev_nvme_attach_controller" 00:25:36.645 } 00:25:36.645 EOF 00:25:36.645 )") 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
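Each pass of the loop above captures a heredoc fragment with the shell variables still unexpanded; the merged, fully expanded list is printed a few entries further down. As a self-contained illustration only (not the verbatim helper), one fragment expanded by hand with the values observed in this run:

# Hypothetical stand-alone expansion of a single config[] fragment
# (subsystem 1); the real helper collects ten of these and joins them.
subsystem=1 TEST_TRANSPORT=rdma NVMF_FIRST_TARGET_IP=192.168.100.8 NVMF_PORT=4420
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
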
00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:25:36.645 01:09:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme1", 00:25:36.645 "trtype": "rdma", 00:25:36.645 "traddr": "192.168.100.8", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "4420", 00:25:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:36.645 "hdgst": false, 00:25:36.645 "ddgst": false 00:25:36.645 }, 00:25:36.645 "method": "bdev_nvme_attach_controller" 00:25:36.645 },{ 00:25:36.645 "params": { 00:25:36.645 "name": "Nvme2", 00:25:36.646 "trtype": "rdma", 00:25:36.646 "traddr": "192.168.100.8", 00:25:36.646 "adrfam": "ipv4", 00:25:36.646 "trsvcid": "4420", 00:25:36.646 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:36.646 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:36.646 "hdgst": false, 00:25:36.646 "ddgst": false 00:25:36.646 }, 00:25:36.646 "method": "bdev_nvme_attach_controller" 00:25:36.646 },{ 00:25:36.646 "params": { 00:25:36.646 "name": "Nvme3", 00:25:36.646 "trtype": "rdma", 00:25:36.646 "traddr": "192.168.100.8", 00:25:36.646 "adrfam": "ipv4", 00:25:36.646 "trsvcid": "4420", 00:25:36.646 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:36.646 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:36.646 "hdgst": false, 00:25:36.646 "ddgst": false 00:25:36.646 }, 00:25:36.646 "method": "bdev_nvme_attach_controller" 00:25:36.646 },{ 00:25:36.646 "params": { 00:25:36.646 "name": "Nvme4", 00:25:36.646 "trtype": "rdma", 00:25:36.646 "traddr": "192.168.100.8", 00:25:36.646 "adrfam": "ipv4", 00:25:36.646 "trsvcid": "4420", 00:25:36.646 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:36.646 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:36.646 "hdgst": false, 00:25:36.646 "ddgst": false 00:25:36.646 }, 00:25:36.646 "method": "bdev_nvme_attach_controller" 00:25:36.646 },{ 00:25:36.646 "params": { 00:25:36.646 "name": "Nvme5", 00:25:36.646 "trtype": "rdma", 00:25:36.646 "traddr": "192.168.100.8", 00:25:36.646 "adrfam": "ipv4", 00:25:36.646 "trsvcid": "4420", 00:25:36.646 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:36.646 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:36.646 "hdgst": false, 00:25:36.646 "ddgst": false 00:25:36.646 }, 00:25:36.646 "method": "bdev_nvme_attach_controller" 00:25:36.646 },{ 00:25:36.646 "params": { 00:25:36.646 "name": "Nvme6", 00:25:36.646 "trtype": "rdma", 00:25:36.646 "traddr": "192.168.100.8", 00:25:36.646 "adrfam": "ipv4", 00:25:36.646 "trsvcid": "4420", 00:25:36.646 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:36.646 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:36.646 "hdgst": false, 00:25:36.646 "ddgst": false 00:25:36.646 }, 00:25:36.646 "method": "bdev_nvme_attach_controller" 00:25:36.646 },{ 00:25:36.646 "params": { 00:25:36.646 "name": "Nvme7", 00:25:36.646 "trtype": "rdma", 00:25:36.646 "traddr": "192.168.100.8", 00:25:36.646 "adrfam": "ipv4", 00:25:36.646 "trsvcid": "4420", 00:25:36.646 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:36.646 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:36.646 "hdgst": false, 00:25:36.646 "ddgst": false 00:25:36.646 }, 00:25:36.646 "method": "bdev_nvme_attach_controller" 00:25:36.646 },{ 00:25:36.646 "params": { 00:25:36.646 "name": "Nvme8", 00:25:36.646 "trtype": "rdma", 00:25:36.646 "traddr": "192.168.100.8", 00:25:36.646 "adrfam": "ipv4", 00:25:36.646 "trsvcid": "4420", 00:25:36.646 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:25:36.646 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:36.646 "hdgst": false, 00:25:36.646 "ddgst": false 00:25:36.646 }, 00:25:36.646 "method": "bdev_nvme_attach_controller" 00:25:36.646 },{ 00:25:36.646 "params": { 00:25:36.646 "name": "Nvme9", 00:25:36.646 "trtype": "rdma", 00:25:36.646 "traddr": "192.168.100.8", 00:25:36.646 "adrfam": "ipv4", 00:25:36.646 "trsvcid": "4420", 00:25:36.646 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:36.646 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:36.646 "hdgst": false, 00:25:36.646 "ddgst": false 00:25:36.646 }, 00:25:36.646 "method": "bdev_nvme_attach_controller" 00:25:36.646 },{ 00:25:36.646 "params": { 00:25:36.646 "name": "Nvme10", 00:25:36.646 "trtype": "rdma", 00:25:36.646 "traddr": "192.168.100.8", 00:25:36.646 "adrfam": "ipv4", 00:25:36.646 "trsvcid": "4420", 00:25:36.646 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:36.646 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:36.646 "hdgst": false, 00:25:36.646 "ddgst": false 00:25:36.646 }, 00:25:36.646 "method": "bdev_nvme_attach_controller" 00:25:36.646 }' 00:25:36.646 [2024-11-19 01:09:43.292547] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:36.646 [2024-11-19 01:09:43.292632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433183 ] 00:25:36.905 [2024-11-19 01:09:43.420091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.905 [2024-11-19 01:09:43.539520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.279 Running I/O for 10 seconds... 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:38.279 01:09:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.279 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.537 01:09:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.537 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:38.537 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:38.537 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 433183 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 433183 ']' 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 433183 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 433183 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 433183' 00:25:38.795 killing process with pid 433183 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 433183 00:25:38.795 01:09:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 433183 00:25:39.053 Received shutdown signal, test time was about 0.795371 seconds 00:25:39.053 00:25:39.053 Latency(us) 00:25:39.053 [2024-11-19T00:09:45.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.053 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:39.053 Verification LBA range: start 0x0 length 0x400 00:25:39.053 Nvme1n1 : 0.79 325.89 20.37 0.00 0.00 192731.92 16103.13 186746.39 00:25:39.053 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:39.053 Verification LBA range: start 0x0 length 0x400 00:25:39.053 Nvme2n1 : 0.77 332.34 20.77 0.00 0.00 184745.57 11172.33 174762.67 00:25:39.053 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:39.053 Verification LBA range: start 0x0 length 0x400 00:25:39.053 Nvme3n1 : 0.77 331.33 20.71 0.00 0.00 181143.41 30084.14 154789.79 00:25:39.053 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:39.053 Verification LBA range: start 0x0 length 0x400 00:25:39.053 Nvme4n1 : 0.79 325.24 20.33 0.00 0.00 179098.82 12046.14 181753.17 00:25:39.053 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:39.054 Verification LBA range: start 0x0 length 0x400 00:25:39.054 Nvme5n1 : 0.78 329.78 20.61 0.00 0.00 173492.42 51430.16 126827.76 00:25:39.054 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:39.054 Verification LBA range: start 0x0 length 0x400 00:25:39.054 Nvme6n1 : 0.79 324.64 20.29 0.00 0.00 170804.30 9924.02 174762.67 00:25:39.054 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:39.054 Verification LBA range: start 0x0 length 0x400 00:25:39.054 Nvme7n1 : 0.79 324.02 20.25 0.00 0.00 166853.97 10111.27 166773.52 00:25:39.054 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:39.054 Verification LBA range: start 0x0 length 0x400 00:25:39.054 Nvme8n1 : 0.78 327.69 20.48 0.00 0.00 161484.80 48683.89 138811.49 00:25:39.054 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:39.054 Verification LBA range: start 0x0 length 0x400 00:25:39.054 Nvme9n1 : 0.79 323.07 20.19 0.00 0.00 159858.35 8800.55 158784.37 00:25:39.054 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:39.054 Verification LBA range: start 0x0 length 0x400 00:25:39.054 Nvme10n1 : 0.79 322.14 20.13 0.00 0.00 155808.67 10236.10 141807.42 00:25:39.054 [2024-11-19T00:09:45.747Z] =================================================================================================================== 00:25:39.054 [2024-11-19T00:09:45.747Z] Total : 3266.16 204.14 0.00 0.00 172602.22 8800.55 186746.39 00:25:39.988 01:09:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 
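Before the teardown that follows, the tc2 body gated on the waitforio helper traced at target/shutdown.sh@60-@68: it polls bdevperf over its RPC socket every 0.25 s, up to ten times, until Nvme1n1 reports at least 100 completed reads (3 on the first poll and 131 shortly after here), and only then kills bdevperf, which prints the latency summary above. A minimal sketch of that loop, assuming rpc_cmd resolves to the harness's scripts/rpc.py wrapper:

# waitforio, as traced: succeed once bdevperf has done >= 100 reads on Nvme1n1.
ret=1
i=10
while (( i != 0 )); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
        | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
    (( i-- ))
done
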
00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 432758 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:41.379 rmmod nvme_rdma 00:25:41.379 rmmod nvme_fabrics 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 432758 ']' 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 432758 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 432758 ']' 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 432758 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 432758 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 432758' 00:25:41.379 killing process with pid 432758 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 432758 00:25:41.379 01:09:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 432758 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:44.663 00:25:44.663 real 0m9.933s 00:25:44.663 user 0m39.694s 00:25:44.663 sys 0m1.480s 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:44.663 ************************************ 00:25:44.663 END TEST nvmf_shutdown_tc2 00:25:44.663 ************************************ 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:44.663 ************************************ 00:25:44.663 START TEST nvmf_shutdown_tc3 00:25:44.663 ************************************ 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:44.663 01:09:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:44.663 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:44.664 01:09:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:44.664 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:44.664 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.664 01:09:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@405 -- # modinfo irdma 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:44.664 Found net devices under 0000:af:00.0: cvl_0_0 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:44.664 Found net devices under 0000:af:00.1: cvl_0_1 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:44.664 01:09:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:44.664 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:44.665 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:44.665 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:44.665 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:44.665 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:44.665 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:44.665 01:09:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:44.665 01:09:51 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:25:44.665 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:25:44.665 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:25:44.665 altname enp175s0f0np0 00:25:44.665 altname ens801f0np0 00:25:44.665 inet 192.168.100.8/24 scope global cvl_0_0 00:25:44.665 valid_lft forever preferred_lft forever 00:25:44.665 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:25:44.665 valid_lft forever preferred_lft forever 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:25:44.665 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:25:44.665 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:25:44.665 altname enp175s0f1np1 00:25:44.665 altname ens801f1np1 00:25:44.665 inet 192.168.100.9/24 scope global cvl_0_1 00:25:44.665 valid_lft forever preferred_lft forever 00:25:44.665 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:25:44.665 valid_lft forever preferred_lft forever 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo cvl_0_0 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:25:44.665 01:09:51 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo cvl_0_1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:44.665 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:44.666 192.168.100.9' 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:44.666 192.168.100.9' 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:44.666 192.168.100.9' 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:44.666 01:09:51 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=434506 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 434506 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 434506 ']' 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.666 01:09:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:44.666 [2024-11-19 01:09:51.236191] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:44.666 [2024-11-19 01:09:51.236283] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.924 [2024-11-19 01:09:51.363991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:44.924 [2024-11-19 01:09:51.473150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.924 [2024-11-19 01:09:51.473200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.924 [2024-11-19 01:09:51.473210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.924 [2024-11-19 01:09:51.473221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.924 [2024-11-19 01:09:51.473228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
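At this point tc3 begins: nvmfappstart launches nvmf_tgt with core mask 0x1E (cores 1-4, matching the four reactor notices below), records its pid (434506), and blocks until the RPC socket answers. A rough sketch of that start-and-wait pattern; the binary path and flags come from the trace, while the polling loop is only an assumption standing in for the real waitforlisten helper:

# Start the target and wait until it is listening on its RPC socket.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &    # -m 0x1E = reactors on cores 1-4
nvmfpid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods only succeeds once the app has finished initializing
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done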
00:25:44.924 [2024-11-19 01:09:51.475640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.924 [2024-11-19 01:09:51.475728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:44.924 [2024-11-19 01:09:51.475797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.924 [2024-11-19 01:09:51.475818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.491 [2024-11-19 01:09:52.102068] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000029440/0x617000007c40) succeed. 00:25:45.491 [2024-11-19 01:09:52.111645] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:25:45.491 [2024-11-19 01:09:52.111673] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:45.491 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:45.492 01:09:52 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.492 01:09:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.750 Malloc1 00:25:45.750 [2024-11-19 01:09:52.283730] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:45.750 Malloc2 00:25:46.008 Malloc3 00:25:46.008 Malloc4 00:25:46.008 Malloc5 00:25:46.266 Malloc6 00:25:46.266 Malloc7 00:25:46.524 Malloc8 00:25:46.524 Malloc9 00:25:46.524 Malloc10 00:25:46.524 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.524 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:46.524 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:46.524 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=434936 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 434936 /var/tmp/bdevperf.sock 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 434936 ']' 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
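The Malloc1..Malloc10 lines above come from starttarget: after nvmf_create_transport, the repeated shutdown.sh@29 cat calls assemble one RPC block per subsystem into rpcs.txt, each creating a malloc bdev, an NVMe-oF subsystem, a namespace, and an RDMA listener on 192.168.100.8:4420. The rpcs.txt content itself is not shown in the log, so the block below is only a plausible reconstruction for one subsystem using standard SPDK RPC names; the bdev size and serial number are assumptions:

# Hypothetical per-subsystem block (i = 1..10); not the literal rpcs.txt content.
i=1
./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 512    # 128 MiB bdev, 512 B blocks (sizes assumed)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
    -t rdma -a 192.168.100.8 -f ipv4 -s 4420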
00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.783 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.783 { 00:25:46.783 "params": { 00:25:46.783 "name": "Nvme$subsystem", 00:25:46.783 "trtype": "$TEST_TRANSPORT", 00:25:46.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.784 "adrfam": "ipv4", 00:25:46.784 "trsvcid": "$NVMF_PORT", 00:25:46.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.784 "hdgst": ${hdgst:-false}, 00:25:46.784 "ddgst": ${ddgst:-false} 00:25:46.784 }, 00:25:46.784 "method": "bdev_nvme_attach_controller" 00:25:46.784 } 00:25:46.784 EOF 00:25:46.784 )") 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.784 { 00:25:46.784 "params": { 00:25:46.784 "name": "Nvme$subsystem", 00:25:46.784 "trtype": "$TEST_TRANSPORT", 00:25:46.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.784 "adrfam": "ipv4", 00:25:46.784 "trsvcid": "$NVMF_PORT", 00:25:46.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.784 "hdgst": ${hdgst:-false}, 00:25:46.784 "ddgst": ${ddgst:-false} 00:25:46.784 }, 00:25:46.784 "method": "bdev_nvme_attach_controller" 00:25:46.784 } 00:25:46.784 EOF 00:25:46.784 )") 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.784 { 00:25:46.784 "params": { 00:25:46.784 "name": "Nvme$subsystem", 00:25:46.784 "trtype": "$TEST_TRANSPORT", 00:25:46.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.784 "adrfam": "ipv4", 00:25:46.784 "trsvcid": "$NVMF_PORT", 00:25:46.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.784 "hdgst": ${hdgst:-false}, 00:25:46.784 "ddgst": ${ddgst:-false} 00:25:46.784 }, 00:25:46.784 "method": "bdev_nvme_attach_controller" 00:25:46.784 } 00:25:46.784 EOF 00:25:46.784 )") 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.784 { 00:25:46.784 "params": { 00:25:46.784 "name": "Nvme$subsystem", 00:25:46.784 "trtype": "$TEST_TRANSPORT", 00:25:46.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.784 "adrfam": "ipv4", 00:25:46.784 "trsvcid": "$NVMF_PORT", 00:25:46.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.784 "hdgst": ${hdgst:-false}, 00:25:46.784 "ddgst": ${ddgst:-false} 00:25:46.784 }, 00:25:46.784 "method": "bdev_nvme_attach_controller" 00:25:46.784 } 00:25:46.784 EOF 00:25:46.784 )") 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.784 { 00:25:46.784 "params": { 00:25:46.784 "name": "Nvme$subsystem", 00:25:46.784 "trtype": "$TEST_TRANSPORT", 00:25:46.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.784 "adrfam": "ipv4", 00:25:46.784 "trsvcid": "$NVMF_PORT", 00:25:46.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.784 "hdgst": ${hdgst:-false}, 00:25:46.784 "ddgst": ${ddgst:-false} 00:25:46.784 }, 00:25:46.784 "method": "bdev_nvme_attach_controller" 00:25:46.784 } 00:25:46.784 EOF 00:25:46.784 )") 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.784 { 00:25:46.784 "params": { 00:25:46.784 "name": "Nvme$subsystem", 00:25:46.784 "trtype": "$TEST_TRANSPORT", 00:25:46.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.784 "adrfam": "ipv4", 00:25:46.784 "trsvcid": "$NVMF_PORT", 00:25:46.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.784 "hdgst": ${hdgst:-false}, 00:25:46.784 "ddgst": ${ddgst:-false} 00:25:46.784 }, 00:25:46.784 "method": "bdev_nvme_attach_controller" 00:25:46.784 } 00:25:46.784 EOF 00:25:46.784 )") 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.784 { 00:25:46.784 "params": { 00:25:46.784 "name": "Nvme$subsystem", 00:25:46.784 "trtype": "$TEST_TRANSPORT", 00:25:46.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.784 "adrfam": "ipv4", 00:25:46.784 "trsvcid": "$NVMF_PORT", 00:25:46.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.784 "hdgst": ${hdgst:-false}, 00:25:46.784 "ddgst": ${ddgst:-false} 00:25:46.784 }, 00:25:46.784 "method": "bdev_nvme_attach_controller" 00:25:46.784 } 00:25:46.784 EOF 00:25:46.784 )") 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.784 01:09:53 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.784 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.784 { 00:25:46.784 "params": { 00:25:46.784 "name": "Nvme$subsystem", 00:25:46.784 "trtype": "$TEST_TRANSPORT", 00:25:46.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.784 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "$NVMF_PORT", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.785 "hdgst": ${hdgst:-false}, 00:25:46.785 "ddgst": ${ddgst:-false} 00:25:46.785 }, 00:25:46.785 "method": "bdev_nvme_attach_controller" 00:25:46.785 } 00:25:46.785 EOF 00:25:46.785 )") 00:25:46.785 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.785 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.785 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.785 { 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme$subsystem", 00:25:46.785 "trtype": "$TEST_TRANSPORT", 00:25:46.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "$NVMF_PORT", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.785 "hdgst": ${hdgst:-false}, 00:25:46.785 "ddgst": ${ddgst:-false} 00:25:46.785 }, 00:25:46.785 "method": "bdev_nvme_attach_controller" 00:25:46.785 } 00:25:46.785 EOF 00:25:46.785 )") 00:25:46.785 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.785 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.785 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.785 { 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme$subsystem", 00:25:46.785 "trtype": "$TEST_TRANSPORT", 00:25:46.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "$NVMF_PORT", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.785 "hdgst": ${hdgst:-false}, 00:25:46.785 "ddgst": ${ddgst:-false} 00:25:46.785 }, 00:25:46.785 "method": "bdev_nvme_attach_controller" 00:25:46.785 } 00:25:46.785 EOF 00:25:46.785 )") 00:25:46.785 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:46.785 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:25:46.785 [2024-11-19 01:09:53.315411] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
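The JSON fragments printed above are gen_nvmf_target_json emitting one bdev_nvme_attach_controller entry per subsystem; bash process substitution then hands the finished document to bdevperf as /dev/fd/63, which is what the shutdown.sh@125 command line shows. A hedged sketch of that plumbing and of what the workload flags mean, assuming gen_nvmf_target_json wraps the entries in a standard "subsystems" config document:

# How the generated config reaches bdevperf (paths and flags from the trace).
# -q 64: queue depth per bdev, -o 65536: 64 KiB I/Os, -w verify: read-back
# verification workload, -t 10: run for 10 seconds.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
# Each entry attaches over RDMA to 192.168.100.8:4420, giving one Nvme<i>n1
# bdev per nqn.2016-06.io.spdk:cnode<i> subsystem created earlier.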
00:25:46.785 [2024-11-19 01:09:53.315497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434936 ] 00:25:46.785 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:25:46.785 01:09:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme1", 00:25:46.785 "trtype": "rdma", 00:25:46.785 "traddr": "192.168.100.8", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "4420", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.785 "hdgst": false, 00:25:46.785 "ddgst": false 00:25:46.785 }, 00:25:46.785 "method": "bdev_nvme_attach_controller" 00:25:46.785 },{ 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme2", 00:25:46.785 "trtype": "rdma", 00:25:46.785 "traddr": "192.168.100.8", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "4420", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:46.785 "hdgst": false, 00:25:46.785 "ddgst": false 00:25:46.785 }, 00:25:46.785 "method": "bdev_nvme_attach_controller" 00:25:46.785 },{ 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme3", 00:25:46.785 "trtype": "rdma", 00:25:46.785 "traddr": "192.168.100.8", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "4420", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:46.785 "hdgst": false, 00:25:46.785 "ddgst": false 00:25:46.785 }, 00:25:46.785 "method": "bdev_nvme_attach_controller" 00:25:46.785 },{ 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme4", 00:25:46.785 "trtype": "rdma", 00:25:46.785 "traddr": "192.168.100.8", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "4420", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:46.785 "hdgst": false, 00:25:46.785 "ddgst": false 00:25:46.785 }, 00:25:46.785 "method": "bdev_nvme_attach_controller" 00:25:46.785 },{ 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme5", 00:25:46.785 "trtype": "rdma", 00:25:46.785 "traddr": "192.168.100.8", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "4420", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:46.785 "hdgst": false, 00:25:46.785 "ddgst": false 00:25:46.785 }, 00:25:46.785 "method": "bdev_nvme_attach_controller" 00:25:46.785 },{ 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme6", 00:25:46.785 "trtype": "rdma", 00:25:46.785 "traddr": "192.168.100.8", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "4420", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:46.785 "hdgst": false, 00:25:46.785 "ddgst": false 00:25:46.785 }, 00:25:46.785 "method": "bdev_nvme_attach_controller" 00:25:46.785 },{ 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme7", 00:25:46.785 "trtype": "rdma", 00:25:46.785 "traddr": "192.168.100.8", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "4420", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:46.785 "hdgst": false, 00:25:46.785 "ddgst": false 00:25:46.785 }, 00:25:46.785 
"method": "bdev_nvme_attach_controller" 00:25:46.785 },{ 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme8", 00:25:46.785 "trtype": "rdma", 00:25:46.785 "traddr": "192.168.100.8", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "4420", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:46.785 "hdgst": false, 00:25:46.785 "ddgst": false 00:25:46.785 }, 00:25:46.785 "method": "bdev_nvme_attach_controller" 00:25:46.785 },{ 00:25:46.785 "params": { 00:25:46.785 "name": "Nvme9", 00:25:46.785 "trtype": "rdma", 00:25:46.785 "traddr": "192.168.100.8", 00:25:46.785 "adrfam": "ipv4", 00:25:46.785 "trsvcid": "4420", 00:25:46.785 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:46.785 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:46.785 "hdgst": false, 00:25:46.786 "ddgst": false 00:25:46.786 }, 00:25:46.786 "method": "bdev_nvme_attach_controller" 00:25:46.786 },{ 00:25:46.786 "params": { 00:25:46.786 "name": "Nvme10", 00:25:46.786 "trtype": "rdma", 00:25:46.786 "traddr": "192.168.100.8", 00:25:46.786 "adrfam": "ipv4", 00:25:46.786 "trsvcid": "4420", 00:25:46.786 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:46.786 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:46.786 "hdgst": false, 00:25:46.786 "ddgst": false 00:25:46.786 }, 00:25:46.786 "method": "bdev_nvme_attach_controller" 00:25:46.786 }' 00:25:46.786 [2024-11-19 01:09:53.440440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.044 [2024-11-19 01:09:53.560976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.417 Running I/O for 10 seconds... 00:25:48.417 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:48.417 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:48.417 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:48.417 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.417 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:48.417 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.417 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.418 01:09:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:48.676 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.676 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=27 00:25:48.676 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 27 -ge 100 ']' 00:25:48.677 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:48.677 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 434506 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 434506 ']' 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 434506 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434506 00:25:48.936 
01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434506' 00:25:48.936 killing process with pid 434506 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 434506 00:25:48.936 01:09:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 434506 00:25:49.768 2579.00 IOPS, 161.19 MiB/s [2024-11-19T00:09:56.461Z] [2024-11-19 01:09:56.243391] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.768 [2024-11-19 01:09:56.243457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026ff480 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026ef3c0 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026df300 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026cf240 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026bf180 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026af0c0 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100269f000 len:0x10000 key:0xf41af884 00:25:49.768 
[2024-11-19 01:09:56.243628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100268ef40 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100267ee80 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100266edc0 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100265ed00 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100264ec40 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100263eb80 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100262eac0 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100261ea00 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100260e940 len:0x10000 key:0xf41af884 00:25:49.768 [2024-11-19 01:09:56.243818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029effc0 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.243844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029dff00 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.243867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029cfe40 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.243889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029bfd80 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.243910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029afcc0 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.243932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100299fc00 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.243953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100298fb40 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.243974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.243986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100297fa80 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.243995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.244006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100296f9c0 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.244015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
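For reference, the read-I/O polling traced earlier in this run (target/shutdown.sh, where read_io_count goes 27 and then 195 before the break) reduces to roughly the loop below. This is a sketch rather than the verbatim helper: rpc_cmd stands in for the test framework's scripts/rpc.py wrapper, and the socket and bdev names mirror the ones used above.

# Poll bdev_get_iostat over the bdevperf RPC socket until the bdev has served
# at least 100 reads, giving up after 10 polls of 0.25 s each.
waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count

    [[ -n $rpc_sock && -n $bdev ]] || return 1

    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if ((read_io_count >= 100)); then
            ret=0
            break
        fi
        sleep 0.25
    done

    return $ret
}

# Usage mirroring the trace: waitforio /var/tmp/bdevperf.sock Nvme1n1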
00:25:49.768 [2024-11-19 01:09:56.244026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100295f900 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.244036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.768 [2024-11-19 01:09:56.244047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100294f840 len:0x10000 key:0xb61afa2d 00:25:49.768 [2024-11-19 01:09:56.244056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100293f780 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100292f6c0 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100291f600 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100290f540 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028ff480 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028ef3c0 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028df300 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028cf240 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028bf180 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010028af0c0 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100289f000 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100288ef40 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100287ee80 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100286edc0 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100285ed00 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100284ec40 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100283eb80 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100282eac0 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100281ea00 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100280e940 len:0x10000 key:0xb61afa2d 00:25:49.769 [2024-11-19 01:09:56.244496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002beffc0 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bdff00 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bcfe40 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bbfd80 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002bafcc0 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b9fc00 len:0x10000 key:0x56634e1c 
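The killprocess sequence traced above (pid 434506, process name reactor_1) has the shape below. Again a sketch of the autotest_common.sh logic as it appears in the trace; the sudo branch, which this run does not take, is omitted.

# Verify the pid is still alive, check it is not a bare sudo wrapper, then
# kill it and wait for it to exit (the caller started it, so wait is valid).
killprocess() {
    local pid=$1 process_name

    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1

    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi

    if [[ $process_name != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi

    wait "$pid" || true
}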
00:25:49.769 [2024-11-19 01:09:56.244624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b8fb40 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b7fa80 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b6f9c0 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b5f900 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b4f840 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b3f780 len:0x10000 key:0x56634e1c 00:25:49.769 [2024-11-19 01:09:56.244749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.769 [2024-11-19 01:09:56.244761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b2f6c0 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.244770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.244781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b1f600 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.244790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.244802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b0f540 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.244811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.244822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aff480 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.244832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.244844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100270f540 len:0x10000 key:0xf41af884 00:25:49.770 [2024-11-19 01:09:56.244854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245639] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.770 [2024-11-19 01:09:56.245664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf300 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf240 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf180 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf0c0 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f000 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a8ef40 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7ee80 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6edc0 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5ed00 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4ec40 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3eb80 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2eac0 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1ea00 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0e940 len:0x10000 key:0x56634e1c 00:25:49.770 [2024-11-19 01:09:56.245960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002deffc0 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.245980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.245991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ddff00 len:0x10000 key:0xf7b81c08 
00:25:49.770 [2024-11-19 01:09:56.246001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcfe40 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dbfd80 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dafcc0 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d9fc00 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d8fb40 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d7fa80 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d6f9c0 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d5f900 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d4f840 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d3f780 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d2f6c0 len:0x10000 key:0xf7b81c08 00:25:49.770 [2024-11-19 01:09:56.246234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.770 [2024-11-19 01:09:56.246245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d1f600 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d0f540 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cff480 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cef3c0 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cdf300 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ccf240 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cbf180 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002caf0c0 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c9f000 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c8ef40 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c7ee80 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c6edc0 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c5ed00 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c4ec40 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c3eb80 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c2eac0 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246607] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c1ea00 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c0e940 len:0x10000 key:0xf7b81c08 00:25:49.771 [2024-11-19 01:09:56.246639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002feffc0 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fdff00 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fcfe40 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fbfd80 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fafcc0 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f9fc00 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f8fb40 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 
nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f7fa80 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f6f9c0 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f5f900 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f4f840 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f3f780 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f2f6c0 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f1f600 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.771 [2024-11-19 01:09:56.246944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f0f540 len:0x10000 key:0x7daf6bd5 00:25:49.771 [2024-11-19 01:09:56.246953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.246964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eff480 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.246974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.246985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eef3c0 len:0x10000 
key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.246994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002edf300 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aef3c0 len:0x10000 key:0x56634e1c 00:25:49.772 [2024-11-19 01:09:56.247039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247696] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.772 [2024-11-19 01:09:56.247717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf180 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf0c0 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f000 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8ef40 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7ee80 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6edc0 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247856] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5ed00 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4ec40 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3eb80 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2eac0 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1ea00 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.247961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0e940 len:0x10000 key:0x7daf6bd5 00:25:49.772 [2024-11-19 01:09:56.247970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031effc0 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff00 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cfe40 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256201] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031bfd80 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afcc0 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fc00 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fb40 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fa80 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316f9c0 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315f900 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314f840 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313f780 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312f6c0 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f600 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f540 len:0x10000 key:0xa797a850 00:25:49.772 [2024-11-19 01:09:56.256454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.772 [2024-11-19 01:09:56.256465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff480 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef3c0 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df300 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf240 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf180 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af0c0 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20100309f000 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308ef40 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307ee80 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306edc0 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305ed00 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304ec40 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303eb80 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302eac0 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301ea00 len:0x10000 key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300e940 len:0x10000 
key:0xa797a850 00:25:49.773 [2024-11-19 01:09:56.256826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033effc0 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 01:09:56.256847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff00 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 01:09:56.256868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cfe40 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 01:09:56.256889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfd80 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 01:09:56.256911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afcc0 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 01:09:56.256934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fc00 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 01:09:56.256955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fb40 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 01:09:56.256976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.256988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fa80 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 01:09:56.256997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.257009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336f9c0 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 
01:09:56.257018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.257030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335f900 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 01:09:56.257040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.257051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334f840 len:0x10000 key:0xd5292bf0 00:25:49.773 [2024-11-19 01:09:56.257061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.773 [2024-11-19 01:09:56.257073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333f780 len:0x10000 key:0xd5292bf0 00:25:49.774 [2024-11-19 01:09:56.257082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.257094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332f6c0 len:0x10000 key:0xd5292bf0 00:25:49.774 [2024-11-19 01:09:56.257103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.257115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f600 len:0x10000 key:0xd5292bf0 00:25:49.774 [2024-11-19 01:09:56.257125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.257136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f540 len:0x10000 key:0xd5292bf0 00:25:49.774 [2024-11-19 01:09:56.257146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.257157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff480 len:0x10000 key:0xd5292bf0 00:25:49.774 [2024-11-19 01:09:56.257168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.257180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef3c0 len:0x10000 key:0xd5292bf0 00:25:49.774 [2024-11-19 01:09:56.257189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.257201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df300 len:0x10000 key:0xd5292bf0 00:25:49.774 [2024-11-19 01:09:56.257210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.257222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf240 len:0x10000 key:0xd5292bf0 00:25:49.774 [2024-11-19 01:09:56.257231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.257243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf180 len:0x10000 key:0xd5292bf0 00:25:49.774 [2024-11-19 01:09:56.257253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.257264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf240 len:0x10000 key:0x7daf6bd5 00:25:49.774 [2024-11-19 01:09:56.257274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32541 cdw0:0 sqhd:2760 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.258023] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.774 [2024-11-19 01:09:56.258055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.258072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:0 sqhd:4e60 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.258089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.258102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:0 sqhd:4e60 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.258116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.258129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:0 sqhd:4e60 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.258142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.258155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:0 sqhd:4e60 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.258507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.774 [2024-11-19 01:09:56.258529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:25:49.774 [2024-11-19 01:09:56.258557] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.774 [2024-11-19 01:09:56.258578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.258593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.258608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.258621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.258635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.258647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.258660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.258672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.258945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.774 [2024-11-19 01:09:56.258964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:25:49.774 [2024-11-19 01:09:56.258991] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.774 [2024-11-19 01:09:56.259008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.259022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.259036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.259049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.259062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.259075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.259089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.259102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.259412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.774 [2024-11-19 01:09:56.259430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:25:49.774 [2024-11-19 01:09:56.259456] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.774 [2024-11-19 01:09:56.259473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.259487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:0 sqhd:5d60 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.259501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.259518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:0 sqhd:5d60 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.259532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.259545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:0 sqhd:5d60 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.259558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.259571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:0 sqhd:5d60 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.259860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.774 [2024-11-19 01:09:56.259879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:25:49.774 [2024-11-19 01:09:56.259901] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.774 [2024-11-19 01:09:56.259917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.774 [2024-11-19 01:09:56.259930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.774 [2024-11-19 01:09:56.259945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.259958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.259972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.259984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.259997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.260010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.260279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.775 [2024-11-19 01:09:56.260302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:25:49.775 [2024-11-19 01:09:56.260327] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.775 [2024-11-19 01:09:56.260344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.260358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.260372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.260384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.260397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.260410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.260426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.260439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.260719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.775 [2024-11-19 01:09:56.260736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:25:49.775 [2024-11-19 01:09:56.260759] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.775 [2024-11-19 01:09:56.260775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.260788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.260802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.260814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.260828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.260848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.260861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.260874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.261141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.775 [2024-11-19 01:09:56.261158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:25:49.775 [2024-11-19 01:09:56.261171] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:25:49.775 [2024-11-19 01:09:56.261191] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.775 [2024-11-19 01:09:56.261206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.261219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.261233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.261245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.261258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.261270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.261284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.261303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.261550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.775 [2024-11-19 01:09:56.261567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:49.775 [2024-11-19 01:09:56.261580] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:25:49.775 [2024-11-19 01:09:56.261600] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.775 [2024-11-19 01:09:56.261616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.261629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.261642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.261655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.261669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.261682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.261695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.261707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.261969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.775 [2024-11-19 01:09:56.261986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:25:49.775 [2024-11-19 01:09:56.261998] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:25:49.775 [2024-11-19 01:09:56.262020] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.775 [2024-11-19 01:09:56.262035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.262049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.262063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.262076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.262090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.262102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.262116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.775 [2024-11-19 01:09:56.262128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.775 [2024-11-19 01:09:56.262412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.775 [2024-11-19 01:09:56.262433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:25:49.775 [2024-11-19 01:09:56.262474] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.775 [2024-11-19 01:09:56.263065] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:25:49.775 [2024-11-19 01:09:56.263090] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.775 [2024-11-19 01:09:56.263727] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:25:49.775 [2024-11-19 01:09:56.263751] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.776 [2024-11-19 01:09:56.264381] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:25:49.776 [2024-11-19 01:09:56.264404] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.776 [2024-11-19 01:09:56.264989] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:25:49.776 [2024-11-19 01:09:56.265011] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.776 [2024-11-19 01:09:56.265581] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:25:49.776 [2024-11-19 01:09:56.265606] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.776 [2024-11-19 01:09:56.266232] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:25:49.776 [2024-11-19 01:09:56.266256] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.776 [2024-11-19 01:09:56.266275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100231f600 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100230f540 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022ff480 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022ef3c0 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022df300 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022cf240 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022bf180 len:0x10000 key:0xe59c2d58 00:25:49.776 
[2024-11-19 01:09:56.266528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022af0c0 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100229f000 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100228ef40 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100227ee80 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100226edc0 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100225ed00 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100224ec40 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100223eb80 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100222eac0 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100221ea00 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100220e940 len:0x10000 key:0xe59c2d58 00:25:49.776 [2024-11-19 01:09:56.266889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025effc0 len:0x10000 key:0x6bac99b5 00:25:49.776 [2024-11-19 01:09:56.266921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025dff00 len:0x10000 key:0x6bac99b5 00:25:49.776 [2024-11-19 01:09:56.266953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.266972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025cfe40 len:0x10000 key:0x6bac99b5 00:25:49.776 [2024-11-19 01:09:56.266986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.267005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025bfd80 len:0x10000 key:0x6bac99b5 00:25:49.776 [2024-11-19 01:09:56.267018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.267037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025afcc0 len:0x10000 key:0x6bac99b5 00:25:49.776 [2024-11-19 01:09:56.267051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.267069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100259fc00 len:0x10000 key:0x6bac99b5 00:25:49.776 [2024-11-19 01:09:56.267083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.267102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100258fb40 len:0x10000 key:0x6bac99b5 00:25:49.776 [2024-11-19 01:09:56.267115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.776 [2024-11-19 01:09:56.267134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100257fa80 len:0x10000 key:0x6bac99b5 00:25:49.776 [2024-11-19 01:09:56.267147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.267167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100256f9c0 len:0x10000 key:0x6bac99b5 00:25:49.776 [2024-11-19 01:09:56.267180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.776 [2024-11-19 01:09:56.267201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100255f900 len:0x10000 key:0x6bac99b5 00:25:49.776 [2024-11-19 01:09:56.267215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100254f840 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100253f780 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100252f6c0 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100251f600 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100250f540 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024ff480 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024ef3c0 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024df300 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024cf240 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024bf180 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024af0c0 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100249f000 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100248ef40 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100247ee80 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100246edc0 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100245ed00 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100244ec40 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100243eb80 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100242eac0 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100241ea00 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100240e940 len:0x10000 key:0x6bac99b5 00:25:49.777 [2024-11-19 01:09:56.267958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.267973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027effc0 len:0x10000 key:0xf41af884 00:25:49.777 [2024-11-19 01:09:56.267986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027dff00 len:0x10000 key:0xf41af884 00:25:49.777 [2024-11-19 01:09:56.268012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027cfe40 len:0x10000 key:0xf41af884 00:25:49.777 [2024-11-19 01:09:56.268038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027bfd80 len:0x10000 key:0xf41af884 
00:25:49.777 [2024-11-19 01:09:56.268064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027afcc0 len:0x10000 key:0xf41af884 00:25:49.777 [2024-11-19 01:09:56.268089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100279fc00 len:0x10000 key:0xf41af884 00:25:49.777 [2024-11-19 01:09:56.268114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100278fb40 len:0x10000 key:0xf41af884 00:25:49.777 [2024-11-19 01:09:56.268139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100277fa80 len:0x10000 key:0xf41af884 00:25:49.777 [2024-11-19 01:09:56.268165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100276f9c0 len:0x10000 key:0xf41af884 00:25:49.777 [2024-11-19 01:09:56.268190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100275f900 len:0x10000 key:0xf41af884 00:25:49.777 [2024-11-19 01:09:56.268216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100274f840 len:0x10000 key:0xf41af884 00:25:49.777 [2024-11-19 01:09:56.268241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.777 [2024-11-19 01:09:56.268255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100273f780 len:0x10000 key:0xf41af884 00:25:49.778 [2024-11-19 01:09:56.268266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.778 [2024-11-19 01:09:56.268282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100272f6c0 len:0x10000 key:0xf41af884 00:25:49.778 [2024-11-19 01:09:56.268297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.778 [2024-11-19 01:09:56.268312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100271f600 len:0x10000 key:0xf41af884 00:25:49.778 [2024-11-19 01:09:56.268323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.778 [2024-11-19 01:09:56.268338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100232f6c0 len:0x10000 key:0xe59c2d58 00:25:49.778 [2024-11-19 01:09:56.268348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.778 [2024-11-19 01:09:56.303000] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:25:49.778 [2024-11-19 01:09:56.305804] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:25:49.778 [2024-11-19 01:09:56.305831] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:25:49.778 [2024-11-19 01:09:56.305844] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:25:49.778 [2024-11-19 01:09:56.305857] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:25:49.778 [2024-11-19 01:09:56.305871] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:25:49.778 [2024-11-19 01:09:56.305883] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:25:49.778 [2024-11-19 01:09:56.305895] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:25:49.778 [2024-11-19 01:09:56.305908] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:25:49.778 [2024-11-19 01:09:56.305920] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:25:49.778 [2024-11-19 01:09:56.305932] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
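The bdev_nvme_failover_ctrlr_unsafe notices above show that, for each of the ten subsystems (cnode1 through cnode10), a second failover request arrived while one was already running and was therefore rejected. To confirm from a captured log that every subsystem hit this path, a grep/uniq sketch along these lines works (build.log is a placeholder name for the saved console output):

    # Count "already in progress" failover rejections per subsystem NQN.
    grep 'Unable to perform failover, already in progress' build.log \
        | grep -oE 'nqn\.2016-06\.io\.spdk:cnode[0-9]+' \
        | sort | uniq -c | sort -rn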
00:25:49.778 [2024-11-19 01:09:56.306051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:25:49.778 [2024-11-19 01:09:56.306070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:25:49.778 [2024-11-19 01:09:56.306082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:25:49.778 [2024-11-19 01:09:56.306093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:49.778 [2024-11-19 01:09:56.306110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:25:49.778 [2024-11-19 01:09:56.307676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:25:49.778 [2024-11-19 01:09:56.307703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:25:49.778 [2024-11-19 01:09:56.307715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:25:49.778 [2024-11-19 01:09:56.307726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:25:49.778 [2024-11-19 01:09:56.307740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:25:49.778 [2024-11-19 01:09:56.325304] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:49.778 [2024-11-19 01:09:56.325345] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:49.778 [2024-11-19 01:09:56.325356] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016f7b100 00:25:49.778 [2024-11-19 01:09:56.325378] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:49.778 [2024-11-19 01:09:56.325389] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:49.778 [2024-11-19 01:09:56.325396] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016f7acc0 00:25:49.778 [2024-11-19 01:09:56.325412] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:49.778 [2024-11-19 01:09:56.325421] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:49.778 [2024-11-19 01:09:56.325429] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016f7e180 00:25:49.778 [2024-11-19 01:09:56.325443] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:49.778 [2024-11-19 01:09:56.325453] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:49.778 [2024-11-19 01:09:56.325460] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:25:49.778 [2024-11-19 01:09:56.325475] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel 
(status = 8) 00:25:49.778 [2024-11-19 01:09:56.325484] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:49.778 [2024-11-19 01:09:56.325492] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200007fff240 00:25:49.778 [2024-11-19 01:09:56.325545] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:49.778 [2024-11-19 01:09:56.325560] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:49.778 [2024-11-19 01:09:56.325569] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fa6280 00:25:49.778 [2024-11-19 01:09:56.325586] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:49.778 [2024-11-19 01:09:56.325595] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:49.778 [2024-11-19 01:09:56.325603] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fd01c0 00:25:49.778 [2024-11-19 01:09:56.325621] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:49.778 [2024-11-19 01:09:56.325631] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:49.778 [2024-11-19 01:09:56.325638] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fd0c00 00:25:49.778 [2024-11-19 01:09:56.325656] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:49.778 [2024-11-19 01:09:56.325665] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:49.778 [2024-11-19 01:09:56.325673] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fd14c0 00:25:49.778 [2024-11-19 01:09:56.325693] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:49.778 [2024-11-19 01:09:56.325702] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:49.778 [2024-11-19 01:09:56.325709] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fd8140 00:25:50.345 task offset: 32768 on job bdev=Nvme7n1 fails 00:25:50.345 00:25:50.345 Latency(us) 00:25:50.345 [2024-11-19T00:09:57.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.345 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:50.345 Job: Nvme1n1 ended in about 1.98 seconds with error 00:25:50.345 Verification LBA range: start 0x0 length 0x400 00:25:50.345 Nvme1n1 : 1.98 141.24 8.83 32.28 0.00 364471.06 3448.44 1002638.38 00:25:50.345 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:50.345 Job: Nvme2n1 ended in about 1.98 seconds with error 00:25:50.345 Verification LBA range: start 0x0 length 0x400 00:25:50.345 Nvme2n1 : 1.98 158.84 9.93 32.27 0.00 328024.85 8613.30 1002638.38 00:25:50.345 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:50.345 Job: Nvme3n1 
ended in about 1.98 seconds with error 00:25:50.345 Verification LBA range: start 0x0 length 0x400 00:25:50.345 Nvme3n1 : 1.98 129.05 8.07 32.26 0.00 385257.67 20222.54 1002638.38 00:25:50.345 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:50.345 Job: Nvme4n1 ended in about 1.98 seconds with error 00:25:50.345 Verification LBA range: start 0x0 length 0x400 00:25:50.345 Nvme4n1 : 1.98 129.00 8.06 32.25 0.00 381726.92 37199.48 1002638.38 00:25:50.345 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:50.345 Job: Nvme5n1 ended in about 1.99 seconds with error 00:25:50.345 Verification LBA range: start 0x0 length 0x400 00:25:50.345 Nvme5n1 : 1.99 128.96 8.06 32.24 0.00 378203.18 54426.09 1006632.96 00:25:50.345 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:50.345 Job: Nvme6n1 ended in about 1.99 seconds with error 00:25:50.345 Verification LBA range: start 0x0 length 0x400 00:25:50.345 Nvme6n1 : 1.99 128.92 8.06 32.23 0.00 374659.17 71403.03 1006632.96 00:25:50.345 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:50.345 Job: Nvme7n1 ended in about 1.52 seconds with error 00:25:50.345 Verification LBA range: start 0x0 length 0x400 00:25:50.345 Nvme7n1 : 1.52 168.54 10.53 42.13 0.00 280423.86 64412.53 663099.49 00:25:50.345 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:50.345 Job: Nvme8n1 ended in about 1.55 seconds with error 00:25:50.345 Verification LBA range: start 0x0 length 0x400 00:25:50.345 Nvme8n1 : 1.55 164.76 10.30 41.19 0.00 284038.88 47685.24 683072.37 00:25:50.345 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:50.345 Job: Nvme9n1 ended in about 1.55 seconds with error 00:25:50.345 Verification LBA range: start 0x0 length 0x400 00:25:50.345 Nvme9n1 : 1.55 164.66 10.29 41.17 0.00 280694.00 30708.30 667094.06 00:25:50.345 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:50.345 Job: Nvme10n1 ended in about 1.56 seconds with error 00:25:50.345 Verification LBA range: start 0x0 length 0x400 00:25:50.345 Nvme10n1 : 1.56 123.43 7.71 41.14 0.00 346511.85 31082.79 647121.19 00:25:50.345 [2024-11-19T00:09:57.038Z] =================================================================================================================== 00:25:50.345 [2024-11-19T00:09:57.038Z] Total : 1437.40 89.84 359.17 0.00 340232.27 3448.44 1006632.96 00:25:50.345 [2024-11-19 01:09:56.870780] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:50.913 [2024-11-19 01:09:57.328317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.914 [2024-11-19 01:09:57.328369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:25:50.914 [2024-11-19 01:09:57.328593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.914 [2024-11-19 01:09:57.328608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
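The bdevperf summary table above prints one row per NVMe bdev plus a Total row, and the Total is simply the column-wise sum of the per-device rows: 141.24 + 158.84 + 129.05 + 129.00 + 128.96 + 128.92 + 168.54 + 164.76 + 164.66 + 123.43 = 1437.40 IOPS, and the MiB/s column likewise sums to 89.84. A small awk sketch that re-derives those sums from a captured log (build.log is again a placeholder file name, and the sketch assumes each table row sits on its own line as bdevperf prints it):

    # Sum the IOPS and MiB/s columns of the per-device rows; expect 1437.40 and
    # 89.84, matching the Total row of the table.
    awk 'match($0, /Nvme[0-9]+n1 : /) {
             split(substr($0, RSTART + RLENGTH), f, " ")
             iops += f[2]; mibs += f[3]    # f[1]=runtime(s), f[2]=IOPS, f[3]=MiB/s
         }
         END { printf "sum IOPS = %.2f, sum MiB/s = %.2f\n", iops, mibs }' build.log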
00:25:50.914 [2024-11-19 01:09:57.328828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.914 [2024-11-19 01:09:57.328842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:50.914 [2024-11-19 01:09:57.329048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.914 [2024-11-19 01:09:57.329061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:50.914 [2024-11-19 01:09:57.329251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.914 [2024-11-19 01:09:57.329264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:25:50.914 [2024-11-19 01:09:57.329487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.914 [2024-11-19 01:09:57.329504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:25:50.914 [2024-11-19 01:09:57.329718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.914 [2024-11-19 01:09:57.329732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:25:50.914 [2024-11-19 01:09:57.329921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.914 [2024-11-19 01:09:57.329935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:25:50.914 [2024-11-19 01:09:57.330142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.914 [2024-11-19 01:09:57.330156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:25:50.914 [2024-11-19 01:09:57.330364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:50.914 [2024-11-19 01:09:57.330379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:25:50.914 [2024-11-19 01:09:57.330388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:25:50.914 [2024-11-19 01:09:57.330400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:25:50.914 [2024-11-19 01:09:57.330410] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:25:50.914 [2024-11-19 01:09:57.330425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
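The transport-level codes in the entries above are negative Linux errno values: the CQ transport error -6 is ENXIO, which the log itself expands to "No such device or address", and the earlier RDMA connect error -74 appears to be EBADMSG under standard Linux errno numbering, consistent with the rejected RDMA_CM_EVENT_ESTABLISHED handshakes. A tiny, hypothetical lookup for the two codes seen in this excerpt (the function name and the -74/EBADMSG mapping are assumptions, not something the test scripts provide):

    errno_name() {
        # Translate the negative errno values that appear in this log excerpt.
        case "${1#-}" in
            6)  echo "ENXIO (No such device or address)" ;;
            74) echo "EBADMSG (Bad message)" ;;
            *)  echo "errno ${1#-}: not mapped here" ;;
        esac
    }
    errno_name -6    # CQ transport error on the admin qpairs
    errno_name -74   # nvme_rdma connect error after RDMA_CM_EVENT_REJECTED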
00:25:50.914 [2024-11-19 01:09:57.330446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:25:50.914 [2024-11-19 01:09:57.330455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:25:50.914 [2024-11-19 01:09:57.330464] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:25:50.914 [2024-11-19 01:09:57.330473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:25:50.914 [2024-11-19 01:09:57.330488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:25:50.914 [2024-11-19 01:09:57.330497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:25:50.914 [2024-11-19 01:09:57.330506] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:25:50.914 [2024-11-19 01:09:57.330515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:25:50.914 [2024-11-19 01:09:57.330528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:50.914 [2024-11-19 01:09:57.330536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:50.914 [2024-11-19 01:09:57.330545] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:25:50.914 [2024-11-19 01:09:57.330554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:25:50.914 [2024-11-19 01:09:57.330570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:25:50.914 [2024-11-19 01:09:57.330579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:25:50.914 [2024-11-19 01:09:57.330587] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:25:50.914 [2024-11-19 01:09:57.330596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:25:50.914 [2024-11-19 01:09:57.330673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:25:50.914 [2024-11-19 01:09:57.330683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:25:50.914 [2024-11-19 01:09:57.330691] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:25:50.914 [2024-11-19 01:09:57.330700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:25:50.914 [2024-11-19 01:09:57.330711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:25:50.914 [2024-11-19 01:09:57.330720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:25:50.914 [2024-11-19 01:09:57.330728] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:25:50.914 [2024-11-19 01:09:57.330737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:25:50.914 [2024-11-19 01:09:57.330748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:25:50.914 [2024-11-19 01:09:57.330757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:25:50.914 [2024-11-19 01:09:57.330765] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:25:50.914 [2024-11-19 01:09:57.330775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:25:50.914 [2024-11-19 01:09:57.330786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:25:50.914 [2024-11-19 01:09:57.330794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:25:50.914 [2024-11-19 01:09:57.330802] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:25:50.914 [2024-11-19 01:09:57.330811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:25:50.914 [2024-11-19 01:09:57.330821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:25:50.914 [2024-11-19 01:09:57.330832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:25:50.914 [2024-11-19 01:09:57.330840] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:25:50.914 [2024-11-19 01:09:57.330849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
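At this point every target controller has been left in a failed state, so the bdevperf process exits non-zero; after the shutdown.sh sleep that follows, the harness reaps it with its NOT wrapper, and the trace below shows wait returning es=255, which is folded to 127 and then to es=1 before the (( !es == 0 )) check passes, i.e. the test case only succeeds because the wrapped command failed. A minimal sketch of that invert-the-exit-status pattern (not the real implementation in autotest_common.sh, which also validates the argument via valid_exec_arg):

    NOT() {
        # Succeed only when the wrapped command fails: a sketch of the pattern,
        # not the actual helper from the SPDK test framework.
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT false && echo "command failed, which is what the test expects"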
00:25:52.290 01:09:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:25:53.228 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 434936 00:25:53.228 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:25:53.228 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 434936 00:25:53.228 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:25:53.228 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.228 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:25:53.228 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.228 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 434936 00:25:53.228 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.229 01:09:59 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:53.229 rmmod nvme_rdma 00:25:53.229 rmmod nvme_fabrics 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 434506 ']' 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 434506 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 434506 ']' 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 434506 00:25:53.229 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (434506) - No such process 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 434506 is not found' 00:25:53.229 Process with pid 434506 is not found 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:53.229 00:25:53.229 real 0m8.892s 00:25:53.229 user 0m32.222s 00:25:53.229 sys 0m1.652s 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:53.229 ************************************ 00:25:53.229 END TEST nvmf_shutdown_tc3 00:25:53.229 ************************************ 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ rdma == \r\d\m\a ]] 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:25:53.229 00:25:53.229 real 0m36.627s 00:25:53.229 user 2m2.152s 00:25:53.229 sys 0m9.345s 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:53.229 ************************************ 00:25:53.229 END TEST nvmf_shutdown 00:25:53.229 ************************************ 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:53.229 ************************************ 00:25:53.229 
START TEST nvmf_nsid 00:25:53.229 ************************************ 00:25:53.229 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:25:53.489 * Looking for test storage... 00:25:53.490 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:53.490 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.490 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.490 01:09:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.490 --rc genhtml_branch_coverage=1 00:25:53.490 --rc genhtml_function_coverage=1 00:25:53.490 --rc genhtml_legend=1 00:25:53.490 --rc geninfo_all_blocks=1 00:25:53.490 --rc geninfo_unexecuted_blocks=1 00:25:53.490 00:25:53.490 ' 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.490 --rc genhtml_branch_coverage=1 00:25:53.490 --rc genhtml_function_coverage=1 00:25:53.490 --rc genhtml_legend=1 00:25:53.490 --rc geninfo_all_blocks=1 00:25:53.490 --rc geninfo_unexecuted_blocks=1 00:25:53.490 00:25:53.490 ' 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.490 --rc genhtml_branch_coverage=1 00:25:53.490 --rc genhtml_function_coverage=1 00:25:53.490 --rc genhtml_legend=1 00:25:53.490 --rc geninfo_all_blocks=1 00:25:53.490 --rc geninfo_unexecuted_blocks=1 00:25:53.490 00:25:53.490 ' 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.490 --rc genhtml_branch_coverage=1 00:25:53.490 --rc genhtml_function_coverage=1 00:25:53.490 --rc genhtml_legend=1 00:25:53.490 --rc geninfo_all_blocks=1 00:25:53.490 --rc geninfo_unexecuted_blocks=1 00:25:53.490 00:25:53.490 ' 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:25:53.490 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.491 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:25:53.491 01:10:00 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.491 01:10:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:00.064 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:00.064 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@405 -- # modinfo irdma 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:00.064 Found net devices under 0000:af:00.0: cvl_0_0 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:00.064 Found net devices under 0000:af:00.1: cvl_0_1 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:00.064 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@109 -- # continue 2 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:26:00.065 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:00.065 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:26:00.065 altname enp175s0f0np0 00:26:00.065 altname ens801f0np0 00:26:00.065 inet 192.168.100.8/24 scope global cvl_0_0 00:26:00.065 valid_lft forever preferred_lft forever 00:26:00.065 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:26:00.065 valid_lft forever preferred_lft forever 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:26:00.065 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:00.065 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:26:00.065 altname enp175s0f1np1 00:26:00.065 altname ens801f1np1 00:26:00.065 inet 192.168.100.9/24 scope global cvl_0_1 00:26:00.065 valid_lft forever preferred_lft forever 00:26:00.065 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:26:00.065 valid_lft forever preferred_lft forever 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:00.065 01:10:05 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:00.065 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:00.066 192.168.100.9' 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:00.066 192.168.100.9' 00:26:00.066 
01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:00.066 192.168.100.9' 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=439127 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 439127 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 439127 ']' 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.066 01:10:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:00.066 [2024-11-19 01:10:05.917375] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:00.066 [2024-11-19 01:10:05.917468] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.066 [2024-11-19 01:10:06.043445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.066 [2024-11-19 01:10:06.144709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:00.066 [2024-11-19 01:10:06.144759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.066 [2024-11-19 01:10:06.144769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.066 [2024-11-19 01:10:06.144779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.066 [2024-11-19 01:10:06.144787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.066 [2024-11-19 01:10:06.146019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.066 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.066 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:00.066 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:00.066 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:00.066 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=439184 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=2218c665-ae39-43ff-8fc2-ba04e201c125 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:00.326 01:10:06 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=d30775bf-0e31-47e4-aed2-49cff3ce3558 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2b0cf8c3-09f3-4c63-8fd8-333adbd3f15a 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:00.326 null0 00:26:00.326 null1 00:26:00.326 null2 00:26:00.326 [2024-11-19 01:10:06.828902] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000029740/0x617000007fc0) succeed. 00:26:00.326 [2024-11-19 01:10:06.837703] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:00.326 [2024-11-19 01:10:06.837785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439184 ] 00:26:00.326 [2024-11-19 01:10:06.838101] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x6120000298c0/0x617000008340) succeed. 00:26:00.326 [2024-11-19 01:10:06.838129] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:26:00.326 [2024-11-19 01:10:06.841163] iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/3071 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:26:00.326 [2024-11-19 01:10:06.841189] iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:26:00.326 [2024-11-19 01:10:06.843368] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:26:00.326 [2024-11-19 01:10:06.863639] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 439184 /var/tmp/tgt2.sock 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 439184 ']' 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:26:00.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
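The trace above starts a second SPDK target for the nsid test, pinned to its own core (-m 2) and its own RPC socket (-r /var/tmp/tgt2.sock), then waits for that socket before configuring it. A minimal sketch of the same launch-and-wait pattern, assuming the spdk_tgt and rpc.py paths shown in the trace and a simple polling loop as a stand-in for the harness's waitforlisten helper:

  # Start a second target on its own core mask and RPC socket (paths as logged above).
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
  tgt2pid=$!
  # Simplified stand-in for waitforlisten: poll until the socket answers a basic RPC.
  until /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done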
00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.326 01:10:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:00.326 [2024-11-19 01:10:06.959364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.585 [2024-11-19 01:10:07.072573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.522 01:10:07 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.522 01:10:07 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:01.522 01:10:07 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:01.782 [2024-11-19 01:10:08.237838] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000029440/0x617000007c40) succeed. 00:26:01.782 [2024-11-19 01:10:08.248578] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x6120000295c0/0x617000007fc0) succeed. 00:26:01.782 [2024-11-19 01:10:08.248611] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:26:01.782 [2024-11-19 01:10:08.251680] iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/3071 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:26:01.782 [2024-11-19 01:10:08.251705] iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:26:01.782 [2024-11-19 01:10:08.253866] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 
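With the second target configured over /var/tmp/tgt2.sock, the trace that follows adds an RDMA listener on 192.168.100.8 port 4421, connects to nqn.2024-10.io.spdk:cnode2 with nvme connect, and then verifies that each namespace's NGUID equals the UUID it was created with, dashes removed. A minimal sketch of that verification, assuming the controller shows up as nvme0 and reusing only commands that appear in the trace (nvme-cli, jq, tr):

  # UUID assigned at namespace-create time (value taken from the trace above).
  ns1uuid=2218c665-ae39-43ff-8fc2-ba04e201c125
  # uuid2nguid in the harness simply strips the dashes; upper-case both sides for the compare.
  expected=$(tr -d - <<< "$ns1uuid" | tr '[:lower:]' '[:upper:]')
  # Read the NGUID the kernel reports for namespace 1 of the connected controller.
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
  [[ "$actual" == "$expected" ]] && echo "nsid 1 NGUID matches its UUID"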
00:26:01.782 [2024-11-19 01:10:08.266124] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:26:01.782 nvme0n1 nvme0n2 00:26:01.782 nvme1n1 00:26:01.782 01:10:08 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:01.782 01:10:08 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:01.782 01:10:08 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 2218c665-ae39-43ff-8fc2-ba04e201c125 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:26:03.157 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2218c665ae3943ff8fc2ba04e201c125 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2218C665AE3943FF8FC2BA04E201C125 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 2218C665AE3943FF8FC2BA04E201C125 == \2\2\1\8\C\6\6\5\A\E\3\9\4\3\F\F\8\F\C\2\B\A\0\4\E\2\0\1\C\1\2\5 ]] 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:03.415 01:10:09 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid d30775bf-0e31-47e4-aed2-49cff3ce3558 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d30775bf0e3147e4aed249cff3ce3558 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D30775BF0E3147E4AED249CFF3CE3558 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ D30775BF0E3147E4AED249CFF3CE3558 == \D\3\0\7\7\5\B\F\0\E\3\1\4\7\E\4\A\E\D\2\4\9\C\F\F\3\C\E\3\5\5\8 ]] 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2b0cf8c3-09f3-4c63-8fd8-333adbd3f15a 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:26:03.415 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:26:03.416 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:26:03.416 01:10:09 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:03.416 01:10:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2b0cf8c309f34c638fd8333adbd3f15a 00:26:03.416 01:10:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2B0CF8C309F34C638FD8333ADBD3F15A 00:26:03.416 01:10:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2B0CF8C309F34C638FD8333ADBD3F15A == 
\2\B\0\C\F\8\C\3\0\9\F\3\4\C\6\3\8\F\D\8\3\3\3\A\D\B\D\3\F\1\5\A ]] 00:26:03.416 01:10:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 439184 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 439184 ']' 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 439184 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 439184 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 439184' 00:26:08.686 killing process with pid 439184 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 439184 00:26:08.686 01:10:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 439184 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:11.220 rmmod nvme_rdma 00:26:11.220 rmmod nvme_fabrics 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 439127 ']' 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 439127 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 439127 ']' 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 439127 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 439127 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 439127' 00:26:11.220 killing process with pid 439127 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 439127 00:26:11.220 01:10:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 439127 00:26:12.157 01:10:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:12.157 01:10:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:12.157 00:26:12.157 real 0m18.610s 00:26:12.157 user 0m24.722s 00:26:12.157 sys 0m5.932s 00:26:12.157 01:10:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.157 01:10:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:12.157 ************************************ 00:26:12.157 END TEST nvmf_nsid 00:26:12.157 ************************************ 00:26:12.158 01:10:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:12.158 00:26:12.158 real 15m5.722s 00:26:12.158 user 45m31.943s 00:26:12.158 sys 2m56.604s 00:26:12.158 01:10:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.158 01:10:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:12.158 ************************************ 00:26:12.158 END TEST nvmf_target_extra 00:26:12.158 ************************************ 00:26:12.158 01:10:18 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:26:12.158 01:10:18 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:12.158 01:10:18 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.158 01:10:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:12.158 ************************************ 00:26:12.158 START TEST nvmf_host 00:26:12.158 ************************************ 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:26:12.158 * Looking for test storage... 
00:26:12.158 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.158 --rc genhtml_branch_coverage=1 00:26:12.158 --rc genhtml_function_coverage=1 00:26:12.158 --rc genhtml_legend=1 00:26:12.158 --rc geninfo_all_blocks=1 00:26:12.158 --rc geninfo_unexecuted_blocks=1 00:26:12.158 00:26:12.158 ' 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:26:12.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.158 --rc genhtml_branch_coverage=1 00:26:12.158 --rc genhtml_function_coverage=1 00:26:12.158 --rc genhtml_legend=1 00:26:12.158 --rc geninfo_all_blocks=1 00:26:12.158 --rc geninfo_unexecuted_blocks=1 00:26:12.158 00:26:12.158 ' 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:12.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.158 --rc genhtml_branch_coverage=1 00:26:12.158 --rc genhtml_function_coverage=1 00:26:12.158 --rc genhtml_legend=1 00:26:12.158 --rc geninfo_all_blocks=1 00:26:12.158 --rc geninfo_unexecuted_blocks=1 00:26:12.158 00:26:12.158 ' 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.158 --rc genhtml_branch_coverage=1 00:26:12.158 --rc genhtml_function_coverage=1 00:26:12.158 --rc genhtml_legend=1 00:26:12.158 --rc geninfo_all_blocks=1 00:26:12.158 --rc geninfo_unexecuted_blocks=1 00:26:12.158 00:26:12.158 ' 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.158 01:10:18 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:12.159 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.159 01:10:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.418 ************************************ 00:26:12.418 START TEST nvmf_multicontroller 00:26:12.418 ************************************ 00:26:12.418 01:10:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:26:12.418 * Looking for test storage... 00:26:12.419 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:26:12.419 01:10:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:12.419 01:10:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:26:12.419 01:10:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.419 --rc genhtml_branch_coverage=1 00:26:12.419 --rc genhtml_function_coverage=1 00:26:12.419 --rc genhtml_legend=1 00:26:12.419 --rc geninfo_all_blocks=1 00:26:12.419 --rc geninfo_unexecuted_blocks=1 00:26:12.419 00:26:12.419 ' 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:12.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.419 --rc genhtml_branch_coverage=1 00:26:12.419 --rc genhtml_function_coverage=1 00:26:12.419 --rc genhtml_legend=1 00:26:12.419 --rc geninfo_all_blocks=1 00:26:12.419 --rc geninfo_unexecuted_blocks=1 00:26:12.419 00:26:12.419 ' 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:12.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.419 --rc genhtml_branch_coverage=1 00:26:12.419 --rc genhtml_function_coverage=1 00:26:12.419 --rc genhtml_legend=1 00:26:12.419 --rc geninfo_all_blocks=1 00:26:12.419 --rc geninfo_unexecuted_blocks=1 00:26:12.419 00:26:12.419 ' 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.419 --rc genhtml_branch_coverage=1 00:26:12.419 --rc genhtml_function_coverage=1 00:26:12.419 --rc genhtml_legend=1 00:26:12.419 --rc geninfo_all_blocks=1 00:26:12.419 --rc geninfo_unexecuted_blocks=1 00:26:12.419 00:26:12.419 ' 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.419 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:12.420 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:12.420 01:10:19 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:26:12.420 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:26:12.420 00:26:12.420 real 0m0.208s 00:26:12.420 user 0m0.119s 00:26:12.420 sys 0m0.102s 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:12.420 ************************************ 00:26:12.420 END TEST nvmf_multicontroller 00:26:12.420 ************************************ 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.420 01:10:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.681 ************************************ 00:26:12.681 START TEST nvmf_aer 00:26:12.681 ************************************ 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:26:12.681 * Looking for test storage... 
00:26:12.681 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.681 --rc genhtml_branch_coverage=1 00:26:12.681 --rc genhtml_function_coverage=1 00:26:12.681 --rc genhtml_legend=1 00:26:12.681 --rc geninfo_all_blocks=1 00:26:12.681 --rc geninfo_unexecuted_blocks=1 00:26:12.681 00:26:12.681 ' 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:12.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.681 --rc genhtml_branch_coverage=1 00:26:12.681 --rc genhtml_function_coverage=1 00:26:12.681 --rc genhtml_legend=1 00:26:12.681 --rc geninfo_all_blocks=1 00:26:12.681 --rc geninfo_unexecuted_blocks=1 00:26:12.681 00:26:12.681 ' 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:12.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.681 --rc genhtml_branch_coverage=1 00:26:12.681 --rc genhtml_function_coverage=1 00:26:12.681 --rc genhtml_legend=1 00:26:12.681 --rc geninfo_all_blocks=1 00:26:12.681 --rc geninfo_unexecuted_blocks=1 00:26:12.681 00:26:12.681 ' 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.681 --rc genhtml_branch_coverage=1 00:26:12.681 --rc genhtml_function_coverage=1 00:26:12.681 --rc genhtml_legend=1 00:26:12.681 --rc geninfo_all_blocks=1 00:26:12.681 --rc geninfo_unexecuted_blocks=1 00:26:12.681 00:26:12.681 ' 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.681 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:12.682 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:26:12.682 01:10:19 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.356 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:19.357 01:10:24 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:19.357 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:19.357 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@405 -- # modinfo irdma 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:19.357 Found net 
devices under 0000:af:00.0: cvl_0_0 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:19.357 Found net devices under 0000:af:00.1: cvl_0_1 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:19.357 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:19.358 01:10:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:26:19.358 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:19.358 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:26:19.358 altname enp175s0f0np0 00:26:19.358 altname ens801f0np0 00:26:19.358 inet 192.168.100.8/24 scope global cvl_0_0 00:26:19.358 valid_lft forever preferred_lft forever 00:26:19.358 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:26:19.358 valid_lft forever preferred_lft forever 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:26:19.358 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:19.358 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:26:19.358 altname enp175s0f1np1 00:26:19.358 altname ens801f1np1 00:26:19.358 inet 192.168.100.9/24 scope global cvl_0_1 
00:26:19.358 valid_lft forever preferred_lft forever 00:26:19.358 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:26:19.358 valid_lft forever preferred_lft forever 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:19.358 192.168.100.9' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:19.358 192.168.100.9' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:19.358 192.168.100.9' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=444419 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 444419 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 444419 ']' 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:19.358 01:10:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.358 [2024-11-19 01:10:25.210170] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
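The trace above is nvmftestinit on the physical E810 setup: the RDMA kernel modules and the Intel irdma driver (with RoCE enabled) are loaded, the cvl_0_0/cvl_0_1 interfaces are read back for 192.168.100.8 and 192.168.100.9, the transport options are pinned to '-t rdma --num-shared-buffers 1024', and nvmf_tgt is launched on core mask 0xF before the script waits for its RPC socket. A condensed sketch of that bring-up, with paths assumed relative to the spdk checkout (this simplifies the framework's nvmftestinit/nvmfappstart helpers rather than reproducing them):

    # RDMA core stack plus the Intel irdma driver with RoCE enabled, as the trace shows
    modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
    modprobe irdma roce_ena=1
    modprobe nvme-rdma

    # read the first target IP back from the E810 port (192.168.100.8 in this run)
    NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show cvl_0_0 | awk '{print $4}' | cut -d/ -f1)

    # start the target on four cores; the framework's waitforlisten keeps retrying an
    # RPC such as rpc_get_methods until /var/tmp/spdk.sock answers
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    ./scripts/rpc.py rpc_get_methods > /dev/null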
00:26:19.359 [2024-11-19 01:10:25.210258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.359 [2024-11-19 01:10:25.335322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:19.359 [2024-11-19 01:10:25.449340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.359 [2024-11-19 01:10:25.449390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.359 [2024-11-19 01:10:25.449400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.359 [2024-11-19 01:10:25.449411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.359 [2024-11-19 01:10:25.449419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:19.359 [2024-11-19 01:10:25.451828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.359 [2024-11-19 01:10:25.451878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.359 [2024-11-19 01:10:25.451962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.359 [2024-11-19 01:10:25.451983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.641 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:19.641 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:19.641 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:19.641 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:19.641 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.641 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.641 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:19.641 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.641 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.641 [2024-11-19 01:10:26.090786] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:26:19.641 [2024-11-19 01:10:26.100499] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:26:19.641 [2024-11-19 01:10:26.100528] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:26:19.641 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.642 Malloc0 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.642 [2024-11-19 01:10:26.226428] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.642 [ 00:26:19.642 { 00:26:19.642 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:19.642 "subtype": "Discovery", 00:26:19.642 "listen_addresses": [], 00:26:19.642 "allow_any_host": true, 00:26:19.642 "hosts": [] 00:26:19.642 }, 00:26:19.642 { 00:26:19.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:19.642 "subtype": "NVMe", 00:26:19.642 "listen_addresses": [ 00:26:19.642 { 00:26:19.642 "trtype": "RDMA", 00:26:19.642 "adrfam": "IPv4", 00:26:19.642 "traddr": "192.168.100.8", 00:26:19.642 "trsvcid": "4420" 00:26:19.642 } 00:26:19.642 ], 00:26:19.642 "allow_any_host": true, 00:26:19.642 "hosts": [], 00:26:19.642 "serial_number": "SPDK00000000000001", 00:26:19.642 "model_number": "SPDK bdev Controller", 00:26:19.642 "max_namespaces": 2, 00:26:19.642 "min_cntlid": 1, 00:26:19.642 "max_cntlid": 65519, 00:26:19.642 "namespaces": [ 00:26:19.642 { 00:26:19.642 "nsid": 1, 00:26:19.642 "bdev_name": "Malloc0", 00:26:19.642 "name": "Malloc0", 00:26:19.642 "nguid": "6F4B560F0D7F468480E95715786A2829", 00:26:19.642 "uuid": "6f4b560f-0d7f-4684-80e9-5715786a2829" 00:26:19.642 } 00:26:19.642 ] 00:26:19.642 } 00:26:19.642 ] 00:26:19.642 01:10:26 
nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=444666 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:19.642 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.926 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.203 Malloc1 00:26:20.203 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.203 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:20.203 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.203 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.203 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.203 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:20.203 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.203 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.203 [ 00:26:20.203 { 00:26:20.203 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:20.203 "subtype": "Discovery", 00:26:20.203 "listen_addresses": [], 00:26:20.203 "allow_any_host": true, 00:26:20.203 "hosts": [] 00:26:20.203 }, 00:26:20.203 { 00:26:20.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.203 "subtype": "NVMe", 00:26:20.203 "listen_addresses": [ 00:26:20.203 { 00:26:20.203 "trtype": "RDMA", 00:26:20.203 "adrfam": "IPv4", 00:26:20.203 "traddr": "192.168.100.8", 00:26:20.203 "trsvcid": "4420" 00:26:20.203 } 00:26:20.203 ], 00:26:20.203 "allow_any_host": true, 00:26:20.203 "hosts": [], 00:26:20.203 "serial_number": "SPDK00000000000001", 00:26:20.203 "model_number": "SPDK bdev Controller", 00:26:20.203 "max_namespaces": 2, 00:26:20.203 "min_cntlid": 1, 00:26:20.203 "max_cntlid": 65519, 00:26:20.203 "namespaces": [ 00:26:20.203 { 00:26:20.203 "nsid": 1, 00:26:20.203 "bdev_name": "Malloc0", 00:26:20.203 "name": "Malloc0", 00:26:20.203 "nguid": "6F4B560F0D7F468480E95715786A2829", 00:26:20.203 "uuid": "6f4b560f-0d7f-4684-80e9-5715786a2829" 00:26:20.203 }, 00:26:20.203 { 00:26:20.203 "nsid": 2, 00:26:20.203 "bdev_name": "Malloc1", 00:26:20.203 "name": "Malloc1", 00:26:20.203 "nguid": "3D8CD0E32A9E40E1BD4F4034F6DFC005", 00:26:20.203 "uuid": "3d8cd0e3-2a9e-40e1-bd4f-4034f6dfc005" 00:26:20.203 } 00:26:20.203 ] 00:26:20.203 } 00:26:20.203 ] 00:26:20.204 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.204 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 444666 00:26:20.204 Asynchronous Event Request test 00:26:20.204 Attaching to 192.168.100.8 00:26:20.204 Attached to 192.168.100.8 00:26:20.204 Registering asynchronous event callbacks... 00:26:20.204 Starting namespace attribute notice tests for all controllers... 00:26:20.204 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:20.204 aer_cb - Changed Namespace 00:26:20.204 Cleaning up... 
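The aer.sh sequence traced here exposes a single malloc namespace over RDMA, starts the aer test client with -n 2 so it registers for asynchronous events, and then hot-adds a second namespace; the target raises a Namespace Attribute Changed notice (log page 4), which the client reports as 'aer_cb - Changed Namespace' above. A condensed sketch of the same RPC sequence, assuming rpc_cmd maps to scripts/rpc.py on the default socket as it does in this framework, and reusing the 192.168.100.8:4420 listener from this run:

    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # the client below registers for AENs, then the namespace hot-add fires the event
    ./test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2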
00:26:20.204 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:20.204 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.204 01:10:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.480 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.480 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:20.480 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.480 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:20.760 rmmod nvme_rdma 00:26:20.760 rmmod nvme_fabrics 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 444419 ']' 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 444419 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 444419 ']' 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 444419 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 444419 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 444419' 00:26:20.760 killing process with pid 
444419 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 444419 00:26:20.760 01:10:27 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 444419 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:22.220 00:26:22.220 real 0m9.415s 00:26:22.220 user 0m13.257s 00:26:22.220 sys 0m4.942s 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:22.220 ************************************ 00:26:22.220 END TEST nvmf_aer 00:26:22.220 ************************************ 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.220 ************************************ 00:26:22.220 START TEST nvmf_async_init 00:26:22.220 ************************************ 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:26:22.220 * Looking for test storage... 00:26:22.220 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:22.220 
01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.220 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:22.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.221 --rc genhtml_branch_coverage=1 00:26:22.221 --rc genhtml_function_coverage=1 00:26:22.221 --rc genhtml_legend=1 00:26:22.221 --rc geninfo_all_blocks=1 00:26:22.221 --rc geninfo_unexecuted_blocks=1 00:26:22.221 00:26:22.221 ' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:22.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.221 --rc genhtml_branch_coverage=1 00:26:22.221 --rc genhtml_function_coverage=1 00:26:22.221 --rc genhtml_legend=1 00:26:22.221 --rc geninfo_all_blocks=1 00:26:22.221 --rc geninfo_unexecuted_blocks=1 00:26:22.221 00:26:22.221 ' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:22.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.221 --rc genhtml_branch_coverage=1 00:26:22.221 --rc genhtml_function_coverage=1 00:26:22.221 --rc genhtml_legend=1 00:26:22.221 --rc geninfo_all_blocks=1 00:26:22.221 --rc geninfo_unexecuted_blocks=1 00:26:22.221 00:26:22.221 ' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:22.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.221 --rc genhtml_branch_coverage=1 00:26:22.221 --rc genhtml_function_coverage=1 00:26:22.221 --rc genhtml_legend=1 00:26:22.221 --rc geninfo_all_blocks=1 00:26:22.221 --rc geninfo_unexecuted_blocks=1 00:26:22.221 00:26:22.221 ' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 
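The trace above steps through scripts/common.sh's component-wise version check, deciding whether the installed lcov is older than 2 so the legacy --rc lcov_* options are kept. A standalone sketch of that idea follows; the function name lt_version is hypothetical (the harness's own helpers are lt, cmp_versions and decimal) and only numeric version components are handled, as in the trace.

  # sketch only: component-wise "is version A < version B", as walked through above
  lt_version() {
    local -a v1 v2
    local i a b n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
      a=${v1[i]:-0} b=${v2[i]:-0}
      (( a < b )) && return 0        # earlier component already smaller -> A < B
      (( a > b )) && return 1
    done
    return 1                         # equal (or A longer with zeros) -> not less-than
  }
  lt_version 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* options"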
00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:22.221 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
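The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 testing an empty variable with -eq; the traced command '[' '' -eq 1 ']' shows the empty operand. A minimal reproduction and one common guard are sketched below; SOME_FLAG is a stand-in name, not the variable common.sh actually tests.

  # sketch only: why an empty operand makes test/[ complain, and a typical guard
  SOME_FLAG=''                          # stand-in for the unset harness flag
  [ "$SOME_FLAG" -eq 1 ]                # -> "[: : integer expression expected"
  [ "${SOME_FLAG:-0}" -eq 1 ]           # defaulting to 0 keeps the test well-formed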
00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9becc242dcb446028259586103be4918 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:22.221 01:10:28 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:28.882 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:28.882 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
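The loop above matches vendor 0x8086 with device ID 0x159b (an E810 variant) out of the cached PCI bus and reports both functions of the adapter. Outside the harness, roughly the same inventory can be taken with pciutils, assuming lspci is available; the addresses below are the ones reported in this run.

  # sketch only: list the E810 functions the trace reports as 0000:af:00.0/.1
  lspci -D -d 8086:159b
  # and the netdevs bound to one of them, as the script does via sysfs:
  ls /sys/bus/pci/devices/0000:af:00.0/net/      # e.g. cvl_0_0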
00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@405 -- # modinfo irdma 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:28.882 Found net devices under 0000:af:00.0: cvl_0_0 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:28.882 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:28.883 Found net devices under 0000:af:00.1: cvl_0_1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:28.883 
01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:26:28.883 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:28.883 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:26:28.883 altname enp175s0f0np0 00:26:28.883 altname ens801f0np0 00:26:28.883 inet 192.168.100.8/24 scope global cvl_0_0 00:26:28.883 valid_lft forever preferred_lft forever 00:26:28.883 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:26:28.883 valid_lft forever preferred_lft forever 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:26:28.883 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:28.883 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:26:28.883 altname enp175s0f1np1 00:26:28.883 altname ens801f1np1 00:26:28.883 inet 192.168.100.9/24 scope global cvl_0_1 00:26:28.883 valid_lft forever preferred_lft forever 00:26:28.883 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:26:28.883 valid_lft forever preferred_lft forever 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:28.883 01:10:34 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:28.883 192.168.100.9' 00:26:28.883 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:28.884 192.168.100.9' 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo 
'192.168.100.8 00:26:28.884 192.168.100.9' 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=448159 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 448159 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 448159 ']' 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.884 01:10:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:28.884 [2024-11-19 01:10:34.732968] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:28.884 [2024-11-19 01:10:34.733060] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.884 [2024-11-19 01:10:34.860514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.884 [2024-11-19 01:10:34.969065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.884 [2024-11-19 01:10:34.969112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
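Before the target application start messages above, the trace shows how nvmftestinit resolved the two RDMA addresses: get_ip_address (the harness's own helper) strips the prefix length from `ip -o -4 addr show`, and the first and second lines of the resulting list become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A condensed sketch of that path, using the interface names from this run, might look like:

  # sketch only: condensed version of the address discovery traced above
  get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  RDMA_IP_LIST=$(for ifc in cvl_0_0 cvl_0_1; do get_ip_address "$ifc"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9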
00:26:28.884 [2024-11-19 01:10:34.969122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.884 [2024-11-19 01:10:34.969132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.884 [2024-11-19 01:10:34.969139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.884 [2024-11-19 01:10:34.970406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.884 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.884 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:28.884 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:28.884 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:28.884 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:28.884 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.884 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:28.884 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.884 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 [2024-11-19 01:10:35.590947] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000289c0/0x617000007fc0) succeed. 00:26:29.144 [2024-11-19 01:10:35.600325] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000028b40/0x617000008340) succeed. 00:26:29.144 [2024-11-19 01:10:35.600353] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 null0 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9becc242dcb446028259586103be4918 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 [2024-11-19 01:10:35.650166] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.144 nvme0n1 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.144 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:26:29.144 [ 00:26:29.144 { 00:26:29.144 "name": "nvme0n1", 00:26:29.144 "aliases": [ 00:26:29.144 "9becc242-dcb4-4602-8259-586103be4918" 00:26:29.144 ], 00:26:29.144 "product_name": "NVMe disk", 00:26:29.144 "block_size": 512, 00:26:29.144 "num_blocks": 2097152, 00:26:29.144 "uuid": "9becc242-dcb4-4602-8259-586103be4918", 00:26:29.144 "numa_id": 1, 00:26:29.144 "assigned_rate_limits": { 00:26:29.144 "rw_ios_per_sec": 0, 00:26:29.144 "rw_mbytes_per_sec": 0, 00:26:29.144 "r_mbytes_per_sec": 0, 00:26:29.144 "w_mbytes_per_sec": 0 00:26:29.144 }, 00:26:29.144 "claimed": false, 00:26:29.144 "zoned": false, 00:26:29.144 "supported_io_types": { 00:26:29.144 "read": true, 00:26:29.144 "write": true, 00:26:29.144 "unmap": false, 00:26:29.144 "flush": true, 00:26:29.144 "reset": true, 00:26:29.144 "nvme_admin": true, 00:26:29.144 "nvme_io": true, 00:26:29.144 "nvme_io_md": false, 00:26:29.144 "write_zeroes": true, 00:26:29.144 "zcopy": false, 00:26:29.144 "get_zone_info": false, 00:26:29.144 "zone_management": false, 00:26:29.144 "zone_append": false, 00:26:29.144 "compare": true, 00:26:29.144 "compare_and_write": true, 00:26:29.144 "abort": true, 00:26:29.144 "seek_hole": false, 00:26:29.144 "seek_data": false, 00:26:29.144 "copy": true, 00:26:29.144 "nvme_iov_md": false 00:26:29.144 }, 00:26:29.144 "memory_domains": [ 00:26:29.144 { 00:26:29.144 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:29.144 "dma_device_type": 0 00:26:29.144 } 00:26:29.144 ], 00:26:29.144 "driver_specific": { 00:26:29.144 "nvme": [ 00:26:29.144 { 00:26:29.144 "trid": { 00:26:29.144 "trtype": "RDMA", 00:26:29.144 "adrfam": "IPv4", 00:26:29.144 "traddr": "192.168.100.8", 00:26:29.144 "trsvcid": "4420", 00:26:29.144 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:29.145 }, 00:26:29.145 "ctrlr_data": { 00:26:29.145 "cntlid": 1, 00:26:29.145 "vendor_id": "0x8086", 00:26:29.145 "model_number": "SPDK bdev Controller", 00:26:29.145 "serial_number": "00000000000000000000", 00:26:29.145 "firmware_revision": "25.01", 00:26:29.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.145 "oacs": { 00:26:29.145 "security": 0, 00:26:29.145 "format": 0, 00:26:29.145 "firmware": 0, 00:26:29.145 "ns_manage": 0 00:26:29.145 }, 00:26:29.145 "multi_ctrlr": true, 00:26:29.145 "ana_reporting": false 00:26:29.145 }, 00:26:29.145 "vs": { 00:26:29.145 "nvme_version": "1.3" 00:26:29.145 }, 00:26:29.145 "ns_data": { 00:26:29.145 "id": 1, 00:26:29.145 "can_share": true 00:26:29.145 } 00:26:29.145 } 00:26:29.145 ], 00:26:29.145 "mp_policy": "active_passive" 00:26:29.145 } 00:26:29.145 } 00:26:29.145 ] 00:26:29.145 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.145 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:29.145 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.145 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.145 [2024-11-19 01:10:35.768808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:29.145 [2024-11-19 01:10:35.811558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:29.145 [2024-11-19 01:10:35.835922] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
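Everything in this test is driven through rpc_cmd against the freshly started nvmf_tgt, so the target-side objects and the initiator-side attach/verify/reset traced above map onto SPDK's scripts/rpc.py roughly as below. This is a hedged outline, not the harness's literal invocation: the RPC path is a stand-in, while the subcommands, flags and the nguid (the `uuidgen | tr -d -` value generated earlier) are taken from the trace.

  # sketch only: approximate rpc.py equivalents of the rpc_cmd calls traced above
  RPC=/path/to/spdk/scripts/rpc.py      # stand-in path to the SPDK checkout
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
  $RPC bdev_null_create null0 1024 512
  $RPC bdev_wait_for_examine
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9becc242dcb446028259586103be4918
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  $RPC bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  $RPC bdev_get_bdevs -b nvme0n1        # JSON above: uuid 9becc242-..., cntlid 1
  $RPC bdev_nvme_reset_controller nvme0 # trace shows cntlid moving 1 -> 2 after the reset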
00:26:29.404 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.404 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:29.404 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.404 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.404 [ 00:26:29.404 { 00:26:29.404 "name": "nvme0n1", 00:26:29.404 "aliases": [ 00:26:29.404 "9becc242-dcb4-4602-8259-586103be4918" 00:26:29.404 ], 00:26:29.404 "product_name": "NVMe disk", 00:26:29.404 "block_size": 512, 00:26:29.404 "num_blocks": 2097152, 00:26:29.404 "uuid": "9becc242-dcb4-4602-8259-586103be4918", 00:26:29.404 "numa_id": 1, 00:26:29.404 "assigned_rate_limits": { 00:26:29.404 "rw_ios_per_sec": 0, 00:26:29.404 "rw_mbytes_per_sec": 0, 00:26:29.404 "r_mbytes_per_sec": 0, 00:26:29.404 "w_mbytes_per_sec": 0 00:26:29.404 }, 00:26:29.404 "claimed": false, 00:26:29.404 "zoned": false, 00:26:29.404 "supported_io_types": { 00:26:29.404 "read": true, 00:26:29.404 "write": true, 00:26:29.404 "unmap": false, 00:26:29.404 "flush": true, 00:26:29.404 "reset": true, 00:26:29.404 "nvme_admin": true, 00:26:29.404 "nvme_io": true, 00:26:29.404 "nvme_io_md": false, 00:26:29.404 "write_zeroes": true, 00:26:29.404 "zcopy": false, 00:26:29.404 "get_zone_info": false, 00:26:29.404 "zone_management": false, 00:26:29.404 "zone_append": false, 00:26:29.404 "compare": true, 00:26:29.404 "compare_and_write": true, 00:26:29.404 "abort": true, 00:26:29.404 "seek_hole": false, 00:26:29.404 "seek_data": false, 00:26:29.404 "copy": true, 00:26:29.404 "nvme_iov_md": false 00:26:29.404 }, 00:26:29.404 "memory_domains": [ 00:26:29.404 { 00:26:29.404 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:29.404 "dma_device_type": 0 00:26:29.404 } 00:26:29.404 ], 00:26:29.404 "driver_specific": { 00:26:29.404 "nvme": [ 00:26:29.404 { 00:26:29.404 "trid": { 00:26:29.404 "trtype": "RDMA", 00:26:29.404 "adrfam": "IPv4", 00:26:29.404 "traddr": "192.168.100.8", 00:26:29.404 "trsvcid": "4420", 00:26:29.404 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:29.404 }, 00:26:29.404 "ctrlr_data": { 00:26:29.404 "cntlid": 2, 00:26:29.404 "vendor_id": "0x8086", 00:26:29.404 "model_number": "SPDK bdev Controller", 00:26:29.404 "serial_number": "00000000000000000000", 00:26:29.404 "firmware_revision": "25.01", 00:26:29.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.404 "oacs": { 00:26:29.404 "security": 0, 00:26:29.404 "format": 0, 00:26:29.404 "firmware": 0, 00:26:29.404 "ns_manage": 0 00:26:29.404 }, 00:26:29.404 "multi_ctrlr": true, 00:26:29.404 "ana_reporting": false 00:26:29.404 }, 00:26:29.404 "vs": { 00:26:29.405 "nvme_version": "1.3" 00:26:29.405 }, 00:26:29.405 "ns_data": { 00:26:29.405 "id": 1, 00:26:29.405 "can_share": true 00:26:29.405 } 00:26:29.405 } 00:26:29.405 ], 00:26:29.405 "mp_policy": "active_passive" 00:26:29.405 } 00:26:29.405 } 00:26:29.405 ] 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.afyZLvu4Wd 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.afyZLvu4Wd 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.afyZLvu4Wd 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.405 [2024-11-19 01:10:35.937135] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.405 01:10:35 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.405 [2024-11-19 01:10:35.957180] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:29.405 nvme0n1 00:26:29.405 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.405 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:29.405 01:10:36 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.405 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.405 [ 00:26:29.405 { 00:26:29.405 "name": "nvme0n1", 00:26:29.405 "aliases": [ 00:26:29.405 "9becc242-dcb4-4602-8259-586103be4918" 00:26:29.405 ], 00:26:29.405 "product_name": "NVMe disk", 00:26:29.405 "block_size": 512, 00:26:29.405 "num_blocks": 2097152, 00:26:29.405 "uuid": "9becc242-dcb4-4602-8259-586103be4918", 00:26:29.405 "numa_id": 1, 00:26:29.405 "assigned_rate_limits": { 00:26:29.405 "rw_ios_per_sec": 0, 00:26:29.405 "rw_mbytes_per_sec": 0, 00:26:29.405 "r_mbytes_per_sec": 0, 00:26:29.405 "w_mbytes_per_sec": 0 00:26:29.405 }, 00:26:29.405 "claimed": false, 00:26:29.405 "zoned": false, 00:26:29.405 "supported_io_types": { 00:26:29.405 "read": true, 00:26:29.405 "write": true, 00:26:29.405 "unmap": false, 00:26:29.405 "flush": true, 00:26:29.405 "reset": true, 00:26:29.405 "nvme_admin": true, 00:26:29.405 "nvme_io": true, 00:26:29.405 "nvme_io_md": false, 00:26:29.405 "write_zeroes": true, 00:26:29.405 "zcopy": false, 00:26:29.405 "get_zone_info": false, 00:26:29.405 "zone_management": false, 00:26:29.405 "zone_append": false, 00:26:29.405 "compare": true, 00:26:29.405 "compare_and_write": true, 00:26:29.405 "abort": true, 00:26:29.405 "seek_hole": false, 00:26:29.405 "seek_data": false, 00:26:29.405 "copy": true, 00:26:29.405 "nvme_iov_md": false 00:26:29.405 }, 00:26:29.405 "memory_domains": [ 00:26:29.405 { 00:26:29.405 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:29.405 "dma_device_type": 0 00:26:29.405 } 00:26:29.405 ], 00:26:29.405 "driver_specific": { 00:26:29.405 "nvme": [ 00:26:29.405 { 00:26:29.405 "trid": { 00:26:29.405 "trtype": "RDMA", 00:26:29.405 "adrfam": "IPv4", 00:26:29.405 "traddr": "192.168.100.8", 00:26:29.405 "trsvcid": "4421", 00:26:29.405 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:29.405 }, 00:26:29.405 "ctrlr_data": { 00:26:29.405 "cntlid": 3, 00:26:29.405 "vendor_id": "0x8086", 00:26:29.405 "model_number": "SPDK bdev Controller", 00:26:29.405 "serial_number": "00000000000000000000", 00:26:29.405 "firmware_revision": "25.01", 00:26:29.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.405 "oacs": { 00:26:29.405 "security": 0, 00:26:29.405 "format": 0, 00:26:29.405 "firmware": 0, 00:26:29.405 "ns_manage": 0 00:26:29.405 }, 00:26:29.405 "multi_ctrlr": true, 00:26:29.405 "ana_reporting": false 00:26:29.405 }, 00:26:29.405 "vs": { 00:26:29.405 "nvme_version": "1.3" 00:26:29.405 }, 00:26:29.405 "ns_data": { 00:26:29.405 "id": 1, 00:26:29.405 "can_share": true 00:26:29.405 } 00:26:29.405 } 00:26:29.405 ], 00:26:29.405 "mp_policy": "active_passive" 00:26:29.405 } 00:26:29.405 } 00:26:29.405 ] 00:26:29.405 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.405 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.405 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.405 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.afyZLvu4Wd 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM 
EXIT 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:29.665 rmmod nvme_rdma 00:26:29.665 rmmod nvme_fabrics 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 448159 ']' 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 448159 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 448159 ']' 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 448159 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 448159 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 448159' 00:26:29.665 killing process with pid 448159 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 448159 00:26:29.665 01:10:36 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 448159 00:26:30.602 01:10:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.602 01:10:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:30.602 00:26:30.602 real 0m8.468s 00:26:30.602 user 0m4.279s 00:26:30.602 sys 0m4.897s 00:26:30.602 01:10:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.602 01:10:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:30.602 ************************************ 00:26:30.602 END TEST nvmf_async_init 00:26:30.602 ************************************ 00:26:30.602 01:10:37 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:30.602 01:10:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:30.602 01:10:37 nvmf_rdma.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:26:30.602 01:10:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.602 ************************************ 00:26:30.602 START TEST dma 00:26:30.602 ************************************ 00:26:30.602 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:30.602 * Looking for test storage... 00:26:30.602 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:26:30.602 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:30.603 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:26:30.603 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:30.862 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:30.862 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.862 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:30.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.863 --rc genhtml_branch_coverage=1 00:26:30.863 --rc genhtml_function_coverage=1 00:26:30.863 --rc genhtml_legend=1 00:26:30.863 --rc geninfo_all_blocks=1 00:26:30.863 --rc geninfo_unexecuted_blocks=1 00:26:30.863 00:26:30.863 ' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:30.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.863 --rc genhtml_branch_coverage=1 00:26:30.863 --rc genhtml_function_coverage=1 00:26:30.863 --rc genhtml_legend=1 00:26:30.863 --rc geninfo_all_blocks=1 00:26:30.863 --rc geninfo_unexecuted_blocks=1 00:26:30.863 00:26:30.863 ' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:30.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.863 --rc genhtml_branch_coverage=1 00:26:30.863 --rc genhtml_function_coverage=1 00:26:30.863 --rc genhtml_legend=1 00:26:30.863 --rc geninfo_all_blocks=1 00:26:30.863 --rc geninfo_unexecuted_blocks=1 00:26:30.863 00:26:30.863 ' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:30.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.863 --rc genhtml_branch_coverage=1 00:26:30.863 --rc genhtml_function_coverage=1 00:26:30.863 --rc genhtml_legend=1 00:26:30.863 --rc geninfo_all_blocks=1 00:26:30.863 --rc geninfo_unexecuted_blocks=1 00:26:30.863 00:26:30.863 ' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:30.863 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:30.863 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.864 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:30.864 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:30.864 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:30.864 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.864 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:26:30.864 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.864 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:30.864 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:30.864 01:10:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:26:30.864 01:10:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:37.436 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:37.436 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@405 -- # modinfo irdma 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.436 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:37.437 Found net devices under 0000:af:00.0: cvl_0_0 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:37.437 Found net devices under 0000:af:00.1: cvl_0_1 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:37.437 01:10:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:37.437 01:10:43 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:26:37.437 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:37.437 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:26:37.437 altname enp175s0f0np0 00:26:37.437 altname ens801f0np0 00:26:37.437 inet 192.168.100.8/24 scope global cvl_0_0 00:26:37.437 valid_lft forever preferred_lft forever 00:26:37.437 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:26:37.437 valid_lft forever preferred_lft forever 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:26:37.437 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:26:37.437 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:26:37.437 altname enp175s0f1np1 00:26:37.437 altname ens801f1np1 00:26:37.437 inet 192.168.100.9/24 scope global cvl_0_1 00:26:37.437 valid_lft forever preferred_lft forever 00:26:37.437 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:26:37.437 valid_lft forever preferred_lft forever 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:37.437 01:10:43 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo cvl_0_0 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo cvl_0_1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:37.437 192.168.100.9' 00:26:37.437 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:37.437 192.168.100.9' 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@485 -- # head -n 1 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:37.438 192.168.100.9' 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=451589 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 451589 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 451589 ']' 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.438 01:10:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:37.438 [2024-11-19 01:10:43.241570] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:37.438 [2024-11-19 01:10:43.241671] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.438 [2024-11-19 01:10:43.366483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:37.438 [2024-11-19 01:10:43.472406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.438 [2024-11-19 01:10:43.472453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:37.438 [2024-11-19 01:10:43.472463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.438 [2024-11-19 01:10:43.472473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.438 [2024-11-19 01:10:43.472480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.438 [2024-11-19 01:10:43.474620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.438 [2024-11-19 01:10:43.474641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.438 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.438 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:26:37.438 01:10:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:37.438 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:37.438 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:37.438 01:10:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.438 01:10:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:37.438 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.438 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:37.438 [2024-11-19 01:10:44.112391] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000028cc0/0x617000007c40) succeed. 00:26:37.438 [2024-11-19 01:10:44.121725] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000028e40/0x617000007fc0) succeed. 00:26:37.438 [2024-11-19 01:10:44.121752] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:26:37.438 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.697 01:10:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:26:37.697 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.697 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:37.957 Malloc0 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:37.957 [2024-11-19 01:10:44.425934] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:37.957 { 00:26:37.957 "params": { 00:26:37.957 "name": "Nvme$subsystem", 00:26:37.957 "trtype": "$TEST_TRANSPORT", 00:26:37.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.957 "adrfam": "ipv4", 00:26:37.957 "trsvcid": "$NVMF_PORT", 00:26:37.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.957 "hdgst": ${hdgst:-false}, 00:26:37.957 "ddgst": ${ddgst:-false} 00:26:37.957 }, 00:26:37.957 "method": "bdev_nvme_attach_controller" 00:26:37.957 } 00:26:37.957 EOF 00:26:37.957 )") 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 
00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:26:37.957 01:10:44 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:37.957 "params": { 00:26:37.957 "name": "Nvme0", 00:26:37.957 "trtype": "rdma", 00:26:37.957 "traddr": "192.168.100.8", 00:26:37.957 "adrfam": "ipv4", 00:26:37.957 "trsvcid": "4420", 00:26:37.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:37.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:37.957 "hdgst": false, 00:26:37.957 "ddgst": false 00:26:37.957 }, 00:26:37.957 "method": "bdev_nvme_attach_controller" 00:26:37.957 }' 00:26:37.957 [2024-11-19 01:10:44.501536] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:37.957 [2024-11-19 01:10:44.501624] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451746 ] 00:26:37.957 [2024-11-19 01:10:44.626774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:38.217 [2024-11-19 01:10:44.735692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:38.217 [2024-11-19 01:10:44.735713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.785 bdev Nvme0n1 reports 1 memory domains 00:26:44.785 bdev Nvme0n1 supports RDMA memory domain 00:26:44.785 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:44.785 ========================================================================== 00:26:44.785 Latency [us] 00:26:44.785 IOPS MiB/s Average min max 00:26:44.785 Core 2: 18956.24 74.05 843.36 312.01 15491.90 00:26:44.785 Core 3: 18617.11 72.72 858.66 308.22 15464.27 00:26:44.785 ========================================================================== 00:26:44.785 Total : 37573.35 146.77 850.94 308.22 15491.90 00:26:44.785 00:26:44.785 Total operations: 187902, translate 187902 pull_push 0 memzero 0 00:26:44.785 01:10:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:26:44.785 01:10:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:26:44.785 01:10:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:26:44.785 [2024-11-19 01:10:51.180068] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:26:44.785 [2024-11-19 01:10:51.180151] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452848 ] 00:26:44.785 [2024-11-19 01:10:51.303031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:44.785 [2024-11-19 01:10:51.416831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.785 [2024-11-19 01:10:51.416844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.903 bdev Malloc0 reports 2 memory domains 00:26:52.903 bdev Malloc0 doesn't support RDMA memory domain 00:26:52.903 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:52.903 ========================================================================== 00:26:52.903 Latency [us] 00:26:52.903 IOPS MiB/s Average min max 00:26:52.903 Core 2: 12399.94 48.44 1289.45 444.60 1747.56 00:26:52.903 Core 3: 12301.19 48.05 1299.76 454.56 1602.23 00:26:52.903 ========================================================================== 00:26:52.903 Total : 24701.13 96.49 1294.58 444.60 1747.56 00:26:52.903 00:26:52.903 Total operations: 123562, translate 0 pull_push 494248 memzero 0 00:26:52.903 01:10:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:26:52.903 01:10:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:26:52.903 01:10:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:26:52.903 01:10:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:26:52.903 Ignoring -M option 00:26:52.903 [2024-11-19 01:10:58.230977] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:26:52.903 [2024-11-19 01:10:58.231071] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453974 ] 00:26:52.903 [2024-11-19 01:10:58.357570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:52.903 [2024-11-19 01:10:58.470907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.903 [2024-11-19 01:10:58.470921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.468 bdev 9c810ca1-6093-4da7-a898-4db97d421712 reports 1 memory domains 00:26:59.468 bdev 9c810ca1-6093-4da7-a898-4db97d421712 supports RDMA memory domain 00:26:59.468 Initialization complete, running randread IO for 5 sec on 2 cores 00:26:59.468 ========================================================================== 00:26:59.468 Latency [us] 00:26:59.468 IOPS MiB/s Average min max 00:26:59.468 Core 2: 64317.45 251.24 247.84 92.03 3732.89 00:26:59.468 Core 3: 64349.44 251.37 247.71 90.79 3585.21 00:26:59.468 ========================================================================== 00:26:59.468 Total : 128666.89 502.61 247.78 90.79 3732.89 00:26:59.468 00:26:59.468 Total operations: 643424, translate 0 pull_push 0 memzero 643424 00:26:59.468 01:11:04 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:26:59.468 [2024-11-19 01:11:05.068156] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:00.847 Initializing NVMe Controllers 00:27:00.847 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:27:00.847 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:00.847 Initialization complete. Launching workers. 00:27:00.847 ======================================================== 00:27:00.847 Latency(us) 00:27:00.847 Device Information : IOPS MiB/s Average min max 00:27:00.847 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 1982.74 7.75 8124.60 3992.68 15961.31 00:27:00.847 ======================================================== 00:27:00.847 Total : 1982.74 7.75 8124.60 3992.68 15961.31 00:27:00.847 00:27:00.847 01:11:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:27:00.847 01:11:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:27:00.847 01:11:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:27:00.847 01:11:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:27:00.847 [2024-11-19 01:11:07.529067] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:27:00.847 [2024-11-19 01:11:07.529161] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455466 ] 00:27:01.107 [2024-11-19 01:11:07.654635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:01.107 [2024-11-19 01:11:07.765032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.107 [2024-11-19 01:11:07.765047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.675 bdev 8b84bb41-c636-43a4-a67d-59c7bf76c385 reports 1 memory domains 00:27:07.675 bdev 8b84bb41-c636-43a4-a67d-59c7bf76c385 supports RDMA memory domain 00:27:07.675 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:07.675 ========================================================================== 00:27:07.675 Latency [us] 00:27:07.675 IOPS MiB/s Average min max 00:27:07.675 Core 2: 17727.31 69.25 901.93 19.40 11634.05 00:27:07.675 Core 3: 17868.08 69.80 894.75 14.98 11524.47 00:27:07.675 ========================================================================== 00:27:07.675 Total : 35595.39 139.04 898.32 14.98 11634.05 00:27:07.675 00:27:07.675 Total operations: 178012, translate 177898 pull_push 0 memzero 114 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:07.675 rmmod nvme_rdma 00:27:07.675 rmmod nvme_fabrics 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 451589 ']' 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 451589 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 451589 ']' 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 451589 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 451589 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 451589' 00:27:07.675 killing process with 
pid 451589 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 451589 00:27:07.675 01:11:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 451589 00:27:09.581 01:11:16 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:09.581 01:11:16 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:09.581 00:27:09.581 real 0m39.010s 00:27:09.581 user 1m57.664s 00:27:09.581 sys 0m6.145s 00:27:09.581 01:11:16 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.581 01:11:16 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:09.582 ************************************ 00:27:09.582 END TEST dma 00:27:09.582 ************************************ 00:27:09.582 01:11:16 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:09.582 01:11:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:09.582 01:11:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.582 01:11:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.582 ************************************ 00:27:09.582 START TEST nvmf_identify 00:27:09.582 ************************************ 00:27:09.582 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:09.841 * Looking for test storage... 00:27:09.841 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:27:09.841 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:09.841 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:27:09.841 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:09.841 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:09.841 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # 
(( v = 0 )) 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:09.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.842 --rc genhtml_branch_coverage=1 00:27:09.842 --rc genhtml_function_coverage=1 00:27:09.842 --rc genhtml_legend=1 00:27:09.842 --rc geninfo_all_blocks=1 00:27:09.842 --rc geninfo_unexecuted_blocks=1 00:27:09.842 00:27:09.842 ' 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:09.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.842 --rc genhtml_branch_coverage=1 00:27:09.842 --rc genhtml_function_coverage=1 00:27:09.842 --rc genhtml_legend=1 00:27:09.842 --rc geninfo_all_blocks=1 00:27:09.842 --rc geninfo_unexecuted_blocks=1 00:27:09.842 00:27:09.842 ' 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:09.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.842 --rc genhtml_branch_coverage=1 00:27:09.842 --rc genhtml_function_coverage=1 00:27:09.842 --rc genhtml_legend=1 00:27:09.842 --rc geninfo_all_blocks=1 00:27:09.842 --rc geninfo_unexecuted_blocks=1 00:27:09.842 00:27:09.842 ' 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:09.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.842 --rc genhtml_branch_coverage=1 00:27:09.842 --rc genhtml_function_coverage=1 00:27:09.842 --rc genhtml_legend=1 00:27:09.842 --rc geninfo_all_blocks=1 00:27:09.842 --rc geninfo_unexecuted_blocks=1 00:27:09.842 00:27:09.842 ' 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:09.842 01:11:16 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.842 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:09.843 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:09.843 01:11:16 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:09.843 01:11:16 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.415 01:11:22 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:16.415 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:16.415 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
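The device discovery traced above matches the E810 ports by PCI vendor/device ID (0x8086:0x159b here) and then resolves each PCI function to its kernel netdev through /sys/bus/pci/devices/<pci>/net/. A minimal standalone sketch of the same lookup, assuming pciutils (lspci) is available; the loop below is illustrative only and is not part of nvmf/common.sh:

# Sketch: list netdevs backed by Intel E810 functions (0x1592 / 0x159b),
# mirroring the sysfs lookup gather_supported_nvmf_pci_devs performs above.
for pci in $(lspci -Dn | awk '$3 ~ /^8086:(1592|159b)$/ {print $1}'); do
  for net in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$net" ] && echo "$pci -> $(basename "$net")"
  done
done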
00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@405 -- # modinfo irdma 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:16.415 Found net devices under 0000:af:00.0: cvl_0_0 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.415 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:16.416 Found net devices under 0000:af:00.1: cvl_0_1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # 
modprobe ib_umad 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo cvl_0_0 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo cvl_0_1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # 
ip addr show cvl_0_0 00:27:16.416 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:16.416 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:27:16.416 altname enp175s0f0np0 00:27:16.416 altname ens801f0np0 00:27:16.416 inet 192.168.100.8/24 scope global cvl_0_0 00:27:16.416 valid_lft forever preferred_lft forever 00:27:16.416 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:27:16.416 valid_lft forever preferred_lft forever 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:27:16.416 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:16.416 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:27:16.416 altname enp175s0f1np1 00:27:16.416 altname ens801f1np1 00:27:16.416 inet 192.168.100.9/24 scope global cvl_0_1 00:27:16.416 valid_lft forever preferred_lft forever 00:27:16.416 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:27:16.416 valid_lft forever preferred_lft forever 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo cvl_0_0 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify 
-- nvmf/common.sh@109 -- # continue 2 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo cvl_0_1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:16.416 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:16.416 192.168.100.9' 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:16.417 192.168.100.9' 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:16.417 192.168.100.9' 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 
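The address harvesting above reduces to one ip/awk/cut pipeline per RDMA interface; the two addresses it reports (192.168.100.8 and 192.168.100.9) become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A self-contained sketch of that helper; the function name get_ipv4 is illustrative, and cvl_0_0/cvl_0_1 are the E810 ports from this run:

# Sketch: print the first IPv4 address of an interface, exactly as the
# "ip -o -4 addr show ... | awk '{print $4}' | cut -d/ -f1" trace above does.
get_ipv4() {
  local ifc=$1
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1 | head -n 1
}
get_ipv4 cvl_0_0   # 192.168.100.8 in this run
get_ipv4 cvl_0_1   # 192.168.100.9 in this run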
00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=459950 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 459950 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 459950 ']' 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.417 01:11:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.417 [2024-11-19 01:11:22.344877] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:16.417 [2024-11-19 01:11:22.344966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.417 [2024-11-19 01:11:22.468115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.417 [2024-11-19 01:11:22.578421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.417 [2024-11-19 01:11:22.578469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.417 [2024-11-19 01:11:22.578481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.417 [2024-11-19 01:11:22.578508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.417 [2024-11-19 01:11:22.578517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
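waitforlisten in the trace above blocks until the freshly started nvmf_tgt answers on its RPC socket. A rough stand-in, reusing the shm id, trace mask, core mask, and socket path shown in the log; polling for the socket file is a simplification of the real helper, which issues an actual RPC:

# Sketch: start the target as above (-i 0 -e 0xFFFF -m 0xF) and wait for
# /var/tmp/spdk.sock to appear before sending any rpc.py commands.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"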
00:27:16.417 [2024-11-19 01:11:22.581152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.417 [2024-11-19 01:11:22.581232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.417 [2024-11-19 01:11:22.581331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.417 [2024-11-19 01:11:22.581351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.676 [2024-11-19 01:11:23.186849] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:27:16.676 [2024-11-19 01:11:23.196464] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:27:16.676 [2024-11-19 01:11:23.196494] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.676 Malloc0 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 
4420 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.676 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.676 [2024-11-19 01:11:23.364748] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:16.939 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.939 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:16.939 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.939 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.940 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.940 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:16.940 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.940 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.940 [ 00:27:16.940 { 00:27:16.940 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:16.940 "subtype": "Discovery", 00:27:16.940 "listen_addresses": [ 00:27:16.940 { 00:27:16.940 "trtype": "RDMA", 00:27:16.940 "adrfam": "IPv4", 00:27:16.940 "traddr": "192.168.100.8", 00:27:16.940 "trsvcid": "4420" 00:27:16.940 } 00:27:16.940 ], 00:27:16.940 "allow_any_host": true, 00:27:16.940 "hosts": [] 00:27:16.940 }, 00:27:16.940 { 00:27:16.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:16.940 "subtype": "NVMe", 00:27:16.940 "listen_addresses": [ 00:27:16.940 { 00:27:16.940 "trtype": "RDMA", 00:27:16.940 "adrfam": "IPv4", 00:27:16.940 "traddr": "192.168.100.8", 00:27:16.940 "trsvcid": "4420" 00:27:16.940 } 00:27:16.940 ], 00:27:16.940 "allow_any_host": true, 00:27:16.940 "hosts": [], 00:27:16.940 "serial_number": "SPDK00000000000001", 00:27:16.940 "model_number": "SPDK bdev Controller", 00:27:16.940 "max_namespaces": 32, 00:27:16.940 "min_cntlid": 1, 00:27:16.940 "max_cntlid": 65519, 00:27:16.940 "namespaces": [ 00:27:16.940 { 00:27:16.940 "nsid": 1, 00:27:16.940 "bdev_name": "Malloc0", 00:27:16.940 "name": "Malloc0", 00:27:16.940 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:16.940 "eui64": "ABCDEF0123456789", 00:27:16.940 "uuid": "e8dbadaa-f720-4954-b791-0cbafdcd0e82" 00:27:16.940 } 00:27:16.940 ] 00:27:16.940 } 00:27:16.940 ] 00:27:16.940 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.940 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:16.940 [2024-11-19 01:11:23.436435] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
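The subsystem JSON dump above is the result of the rpc_cmd calls traced in host/identify.sh. Condensed into direct scripts/rpc.py invocations against the default /var/tmp/spdk.sock socket, with the NQN, addresses, and sizes copied from this run:

# Sketch: reproduce the target configuration built above - RDMA transport,
# 64 MiB / 512 B malloc bdev, one subsystem, its namespace, and two listeners.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_get_subsystems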
00:27:16.940 [2024-11-19 01:11:23.436505] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460192 ] 00:27:16.940 [2024-11-19 01:11:23.504550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:27:16.940 [2024-11-19 01:11:23.504653] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:27:16.940 [2024-11-19 01:11:23.504674] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:27:16.940 [2024-11-19 01:11:23.504681] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:27:16.940 [2024-11-19 01:11:23.504726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:27:16.940 [2024-11-19 01:11:23.521663] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:27:16.940 [2024-11-19 01:11:23.537169] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:16.940 [2024-11-19 01:11:23.537189] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:27:16.940 [2024-11-19 01:11:23.537205] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537214] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537224] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537231] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537239] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537246] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537254] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537263] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537275] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537282] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537289] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537300] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537308] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537315] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537322] nvme_rdma.c: 878:nvme_rdma_create_rsps: 
*DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537329] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537336] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537343] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537352] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537358] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537366] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537376] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537384] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537390] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537399] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537406] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537413] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537420] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537427] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537434] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537442] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537448] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:27:16.940 [2024-11-19 01:11:23.537457] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:16.940 [2024-11-19 01:11:23.537464] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:27:16.940 [2024-11-19 01:11:23.537492] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.537509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.542306] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.940 [2024-11-19 01:11:23.542328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:16.940 [2024-11-19 01:11:23.542344] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.542358] 
nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:16.940 [2024-11-19 01:11:23.542375] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:27:16.940 [2024-11-19 01:11:23.542383] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:27:16.940 [2024-11-19 01:11:23.542404] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.542416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.940 [2024-11-19 01:11:23.542456] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.940 [2024-11-19 01:11:23.542465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:27:16.940 [2024-11-19 01:11:23.542476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:27:16.940 [2024-11-19 01:11:23.542484] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.542494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:27:16.940 [2024-11-19 01:11:23.542504] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.940 [2024-11-19 01:11:23.542520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.941 [2024-11-19 01:11:23.542551] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.941 [2024-11-19 01:11:23.542562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:27:16.941 [2024-11-19 01:11:23.542570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:27:16.941 [2024-11-19 01:11:23.542579] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.542589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:16.941 [2024-11-19 01:11:23.542602] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.542611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.941 [2024-11-19 01:11:23.542639] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.941 [2024-11-19 01:11:23.542647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:16.941 [2024-11-19 01:11:23.542656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:16.941 [2024-11-19 01:11:23.542663] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.542675] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.542685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.941 [2024-11-19 01:11:23.542712] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.941 [2024-11-19 01:11:23.542719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:16.941 [2024-11-19 01:11:23.542733] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:16.941 [2024-11-19 01:11:23.542742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:16.941 [2024-11-19 01:11:23.542751] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.542759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:16.941 [2024-11-19 01:11:23.542868] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:27:16.941 [2024-11-19 01:11:23.542875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:16.941 [2024-11-19 01:11:23.542888] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.542898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.941 [2024-11-19 01:11:23.542926] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.941 [2024-11-19 01:11:23.542933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:16.941 [2024-11-19 01:11:23.542942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:16.941 [2024-11-19 01:11:23.542949] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.542961] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.542975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.941 [2024-11-19 01:11:23.543008] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.941 [2024-11-19 01:11:23.543015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:16.941 [2024-11-19 01:11:23.543025] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:16.941 [2024-11-19 01:11:23.543033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:16.941 [2024-11-19 01:11:23.543041] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543053] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:27:16.941 [2024-11-19 01:11:23.543068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:16.941 [2024-11-19 01:11:23.543087] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543155] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.941 [2024-11-19 01:11:23.543164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:16.941 [2024-11-19 01:11:23.543177] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:27:16.941 [2024-11-19 01:11:23.543188] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:27:16.941 [2024-11-19 01:11:23.543195] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:27:16.941 [2024-11-19 01:11:23.543209] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 6 00:27:16.941 [2024-11-19 01:11:23.543216] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:27:16.941 [2024-11-19 01:11:23.543225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:27:16.941 [2024-11-19 01:11:23.543232] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:16.941 [2024-11-19 01:11:23.543256] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.941 [2024-11-19 01:11:23.543310] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.941 [2024-11-19 01:11:23.543319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:16.941 [2024-11-19 01:11:23.543329] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 
lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.941 [2024-11-19 01:11:23.543350] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.941 [2024-11-19 01:11:23.543368] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.941 [2024-11-19 01:11:23.543387] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.941 [2024-11-19 01:11:23.543404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:16.941 [2024-11-19 01:11:23.543415] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:16.941 [2024-11-19 01:11:23.543440] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.941 [2024-11-19 01:11:23.543481] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.941 [2024-11-19 01:11:23.543488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:27:16.941 [2024-11-19 01:11:23.543497] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:27:16.941 [2024-11-19 01:11:23.543507] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:27:16.941 [2024-11-19 01:11:23.543515] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543539] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0xd4be1bcd 00:27:16.941 [2024-11-19 01:11:23.543597] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.941 [2024-11-19 01:11:23.543607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:16.942 
[2024-11-19 01:11:23.543623] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543637] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:27:16.942 [2024-11-19 01:11:23.543673] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x400 key:0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543695] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.942 [2024-11-19 01:11:23.543775] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.942 [2024-11-19 01:11:23.543786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:16.942 [2024-11-19 01:11:23.543806] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543828] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543837] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.942 [2024-11-19 01:11:23.543844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:16.942 [2024-11-19 01:11:23.543852] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543868] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.942 [2024-11-19 01:11:23.543879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:16.942 [2024-11-19 01:11:23.543893] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543912] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0xd4be1bcd 00:27:16.942 [2024-11-19 01:11:23.543943] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.942 [2024-11-19 01:11:23.543950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:16.942 [2024-11-19 01:11:23.543965] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0xd4be1bcd 00:27:16.942 
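The trace above records the discovery controller being brought to the ready state over RDMA (CC.EN = 1 and CSTS.RDY = 1, IDENTIFY, AER configuration, keep-alive timer) before the identify summary below is printed. As a minimal host-side sketch of the same operation, assuming SPDK's public C API (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data, spdk_nvme_detach) rather than the test's own identify.sh wrapper, a standalone program could look roughly like this:

/* Minimal sketch: connect to the NVMe-oF discovery subsystem exercised above
 * (RDMA, 192.168.100.8:4420) and print a few identify-controller fields.
 * Assumes SPDK's public C API; error handling is reduced to the essentials. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) != 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same style of transport ID string the test passes to spdk_nvme_identify. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		fprintf(stderr, "could not parse transport ID\n");
		return 1;
	}

	/* Drives the CC.EN/CSTS.RDY init sequence seen in the debug trace above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to discovery controller failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID 0x%04x, firmware %.8s, MDTS %u\n",
	       cdata->cntlid, cdata->fr, cdata->mdts);

	spdk_nvme_detach(ctrlr);   /* triggers the shutdown handshake traced below */
	return 0;
}

The identify dump that follows is what the test binary prints after this connect step succeeds; the "Prepare to destruct SSD" messages further down correspond to the detach.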
===================================================== 00:27:16.942 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:16.942 ===================================================== 00:27:16.942 Controller Capabilities/Features 00:27:16.942 ================================ 00:27:16.942 Vendor ID: 0000 00:27:16.942 Subsystem Vendor ID: 0000 00:27:16.942 Serial Number: .................... 00:27:16.942 Model Number: ........................................ 00:27:16.942 Firmware Version: 25.01 00:27:16.942 Recommended Arb Burst: 0 00:27:16.942 IEEE OUI Identifier: 00 00 00 00:27:16.942 Multi-path I/O 00:27:16.942 May have multiple subsystem ports: No 00:27:16.942 May have multiple controllers: No 00:27:16.942 Associated with SR-IOV VF: No 00:27:16.942 Max Data Transfer Size: 131072 00:27:16.942 Max Number of Namespaces: 0 00:27:16.942 Max Number of I/O Queues: 1024 00:27:16.942 NVMe Specification Version (VS): 1.3 00:27:16.942 NVMe Specification Version (Identify): 1.3 00:27:16.942 Maximum Queue Entries: 128 00:27:16.942 Contiguous Queues Required: Yes 00:27:16.942 Arbitration Mechanisms Supported 00:27:16.942 Weighted Round Robin: Not Supported 00:27:16.942 Vendor Specific: Not Supported 00:27:16.942 Reset Timeout: 15000 ms 00:27:16.942 Doorbell Stride: 4 bytes 00:27:16.942 NVM Subsystem Reset: Not Supported 00:27:16.942 Command Sets Supported 00:27:16.942 NVM Command Set: Supported 00:27:16.942 Boot Partition: Not Supported 00:27:16.942 Memory Page Size Minimum: 4096 bytes 00:27:16.942 Memory Page Size Maximum: 4096 bytes 00:27:16.942 Persistent Memory Region: Not Supported 00:27:16.942 Optional Asynchronous Events Supported 00:27:16.942 Namespace Attribute Notices: Not Supported 00:27:16.942 Firmware Activation Notices: Not Supported 00:27:16.942 ANA Change Notices: Not Supported 00:27:16.942 PLE Aggregate Log Change Notices: Not Supported 00:27:16.942 LBA Status Info Alert Notices: Not Supported 00:27:16.942 EGE Aggregate Log Change Notices: Not Supported 00:27:16.942 Normal NVM Subsystem Shutdown event: Not Supported 00:27:16.942 Zone Descriptor Change Notices: Not Supported 00:27:16.942 Discovery Log Change Notices: Supported 00:27:16.942 Controller Attributes 00:27:16.942 128-bit Host Identifier: Not Supported 00:27:16.942 Non-Operational Permissive Mode: Not Supported 00:27:16.942 NVM Sets: Not Supported 00:27:16.942 Read Recovery Levels: Not Supported 00:27:16.942 Endurance Groups: Not Supported 00:27:16.942 Predictable Latency Mode: Not Supported 00:27:16.942 Traffic Based Keep ALive: Not Supported 00:27:16.942 Namespace Granularity: Not Supported 00:27:16.942 SQ Associations: Not Supported 00:27:16.942 UUID List: Not Supported 00:27:16.942 Multi-Domain Subsystem: Not Supported 00:27:16.942 Fixed Capacity Management: Not Supported 00:27:16.942 Variable Capacity Management: Not Supported 00:27:16.942 Delete Endurance Group: Not Supported 00:27:16.942 Delete NVM Set: Not Supported 00:27:16.942 Extended LBA Formats Supported: Not Supported 00:27:16.942 Flexible Data Placement Supported: Not Supported 00:27:16.942 00:27:16.942 Controller Memory Buffer Support 00:27:16.942 ================================ 00:27:16.942 Supported: No 00:27:16.942 00:27:16.942 Persistent Memory Region Support 00:27:16.942 ================================ 00:27:16.942 Supported: No 00:27:16.942 00:27:16.942 Admin Command Set Attributes 00:27:16.942 ============================ 00:27:16.942 Security Send/Receive: Not Supported 00:27:16.942 Format NVM: Not Supported 
00:27:16.942 Firmware Activate/Download: Not Supported 00:27:16.942 Namespace Management: Not Supported 00:27:16.942 Device Self-Test: Not Supported 00:27:16.942 Directives: Not Supported 00:27:16.942 NVMe-MI: Not Supported 00:27:16.942 Virtualization Management: Not Supported 00:27:16.942 Doorbell Buffer Config: Not Supported 00:27:16.942 Get LBA Status Capability: Not Supported 00:27:16.942 Command & Feature Lockdown Capability: Not Supported 00:27:16.942 Abort Command Limit: 1 00:27:16.942 Async Event Request Limit: 4 00:27:16.942 Number of Firmware Slots: N/A 00:27:16.942 Firmware Slot 1 Read-Only: N/A 00:27:16.942 Firmware Activation Without Reset: N/A 00:27:16.942 Multiple Update Detection Support: N/A 00:27:16.942 Firmware Update Granularity: No Information Provided 00:27:16.942 Per-Namespace SMART Log: No 00:27:16.942 Asymmetric Namespace Access Log Page: Not Supported 00:27:16.942 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:16.942 Command Effects Log Page: Not Supported 00:27:16.942 Get Log Page Extended Data: Supported 00:27:16.942 Telemetry Log Pages: Not Supported 00:27:16.942 Persistent Event Log Pages: Not Supported 00:27:16.942 Supported Log Pages Log Page: May Support 00:27:16.942 Commands Supported & Effects Log Page: Not Supported 00:27:16.942 Feature Identifiers & Effects Log Page:May Support 00:27:16.942 NVMe-MI Commands & Effects Log Page: May Support 00:27:16.942 Data Area 4 for Telemetry Log: Not Supported 00:27:16.942 Error Log Page Entries Supported: 128 00:27:16.942 Keep Alive: Not Supported 00:27:16.942 00:27:16.942 NVM Command Set Attributes 00:27:16.942 ========================== 00:27:16.942 Submission Queue Entry Size 00:27:16.942 Max: 1 00:27:16.942 Min: 1 00:27:16.942 Completion Queue Entry Size 00:27:16.942 Max: 1 00:27:16.942 Min: 1 00:27:16.942 Number of Namespaces: 0 00:27:16.942 Compare Command: Not Supported 00:27:16.942 Write Uncorrectable Command: Not Supported 00:27:16.942 Dataset Management Command: Not Supported 00:27:16.942 Write Zeroes Command: Not Supported 00:27:16.942 Set Features Save Field: Not Supported 00:27:16.942 Reservations: Not Supported 00:27:16.942 Timestamp: Not Supported 00:27:16.943 Copy: Not Supported 00:27:16.943 Volatile Write Cache: Not Present 00:27:16.943 Atomic Write Unit (Normal): 1 00:27:16.943 Atomic Write Unit (PFail): 1 00:27:16.943 Atomic Compare & Write Unit: 1 00:27:16.943 Fused Compare & Write: Supported 00:27:16.943 Scatter-Gather List 00:27:16.943 SGL Command Set: Supported 00:27:16.943 SGL Keyed: Supported 00:27:16.943 SGL Bit Bucket Descriptor: Not Supported 00:27:16.943 SGL Metadata Pointer: Not Supported 00:27:16.943 Oversized SGL: Not Supported 00:27:16.943 SGL Metadata Address: Not Supported 00:27:16.943 SGL Offset: Supported 00:27:16.943 Transport SGL Data Block: Not Supported 00:27:16.943 Replay Protected Memory Block: Not Supported 00:27:16.943 00:27:16.943 Firmware Slot Information 00:27:16.943 ========================= 00:27:16.943 Active slot: 0 00:27:16.943 00:27:16.943 00:27:16.943 Error Log 00:27:16.943 ========= 00:27:16.943 00:27:16.943 Active Namespaces 00:27:16.943 ================= 00:27:16.943 Discovery Log Page 00:27:16.943 ================== 00:27:16.943 Generation Counter: 2 00:27:16.943 Number of Records: 2 00:27:16.943 Record Format: 0 00:27:16.943 00:27:16.943 Discovery Log Entry 0 00:27:16.943 ---------------------- 00:27:16.943 Transport Type: 1 (RDMA) 00:27:16.943 Address Family: 1 (IPv4) 00:27:16.943 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:16.943 Entry 
Flags: 00:27:16.943 Duplicate Returned Information: 1 00:27:16.943 Explicit Persistent Connection Support for Discovery: 1 00:27:16.943 Transport Requirements: 00:27:16.943 Secure Channel: Not Required 00:27:16.943 Port ID: 0 (0x0000) 00:27:16.943 Controller ID: 65535 (0xffff) 00:27:16.943 Admin Max SQ Size: 128 00:27:16.943 Transport Service Identifier: 4420 00:27:16.943 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:16.943 Transport Address: 192.168.100.8 00:27:16.943 Transport Specific Address Subtype - RDMA 00:27:16.943 RDMA QP Service Type: 1 (Reliable Connected) 00:27:16.943 RDMA Provider Type: 1 (No provider specified) 00:27:16.943 RDMA CM Service: 1 (RDMA_CM) 00:27:16.943 Discovery Log Entry 1 00:27:16.943 ---------------------- 00:27:16.943 Transport Type: 1 (RDMA) 00:27:16.943 Address Family: 1 (IPv4) 00:27:16.943 Subsystem Type: 2 (NVM Subsystem) 00:27:16.943 Entry Flags: 00:27:16.943 Duplicate Returned Information: 0 00:27:16.943 Explicit Persistent Connection Support for Discovery: 0 00:27:16.943 Transport Requirements: 00:27:16.943 Secure Channel: Not Required 00:27:16.943 Port ID: 0 (0x0000) 00:27:16.943 Controller ID: 65535 (0xffff) 00:27:16.943 Admin Max SQ Size: [2024-11-19 01:11:23.544065] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:27:16.943 [2024-11-19 01:11:23.544085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544128] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.943 [2024-11-19 01:11:23.544169] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.943 [2024-11-19 01:11:23.544177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544190] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.943 [2024-11-19 01:11:23.544209] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544240] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.943 [2024-11-19 01:11:23.544250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544259] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:27:16.943 [2024-11-19 01:11:23.544269] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:27:16.943 [2024-11-19 01:11:23.544277] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544300] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.943 [2024-11-19 01:11:23.544346] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.943 [2024-11-19 01:11:23.544353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544362] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544373] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.943 [2024-11-19 01:11:23.544411] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.943 [2024-11-19 01:11:23.544420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544427] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544439] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.943 [2024-11-19 01:11:23.544484] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.943 [2024-11-19 01:11:23.544491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544503] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544513] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.943 [2024-11-19 01:11:23.544550] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.943 [2024-11-19 01:11:23.544558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544565] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544577] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.943 [2024-11-19 01:11:23.544620] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.943 [2024-11-19 01:11:23.544627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544635] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544645] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.943 [2024-11-19 01:11:23.544684] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.943 [2024-11-19 01:11:23.544693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:27:16.943 [2024-11-19 01:11:23.544700] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544714] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.943 [2024-11-19 01:11:23.544723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.943 [2024-11-19 01:11:23.544758] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.943 [2024-11-19 01:11:23.544765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.544773] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.544784] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.544796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.544832] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.544841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.544848] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.544863] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.544872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.544901] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.544908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:27:16.944 [2024-11-19 01:11:23.544916] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.544926] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.544941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.544960] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.544968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.544975] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.544987] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.544996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545023] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545042] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545052] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545087] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545102] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545114] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545152] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545168] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545178] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 
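Between "Prepare to destruct SSD" above and the "shutdown complete in 6 milliseconds" message further down, the host detaches from the discovery controller: it reads RTD3E, requests a normal shutdown through CC.SHN, and then polls CSTS.SHST via the long run of FABRIC PROPERTY GET commands. A self-contained conceptual sketch of that handshake (not the SPDK implementation; the in-memory register file and its accessors are hypothetical stand-ins for the property GET/SET commands in the log) looks roughly like:

/* Conceptual sketch of the fabrics shutdown handshake traced here -- not the
 * SPDK implementation.  A tiny fake "register file" stands in for the FABRIC
 * PROPERTY GET/SET commands shown in the log. */
#include <stdio.h>
#include <stdint.h>

#define REG_CC    0x14u                 /* Controller Configuration */
#define REG_CSTS  0x1cu                 /* Controller Status        */

static uint32_t regs[0x40];             /* fake property space (sketch only) */

static uint32_t read_reg(uint32_t off) { return regs[off]; }

static void write_reg(uint32_t off, uint32_t v)
{
	regs[off] = v;
	/* Fake controller: acknowledge CC.SHN = 01b by setting CSTS.SHST = 10b. */
	if (off == REG_CC && ((v >> 14) & 0x3u) == 0x1u) {
		regs[REG_CSTS] |= 0x2u << 2;
	}
}

int main(void)
{
	uint32_t cc = read_reg(REG_CC);
	int polls = 0;

	cc &= ~(0x3u << 14);
	cc |= 0x1u << 14;                 /* CC.SHN = 01b: normal shutdown */
	write_reg(REG_CC, cc);

	/* Poll CSTS.SHST until 10b (shutdown complete); the real code bounds this
	 * with the "shutdown timeout = 10000 ms" seen in the trace. */
	while (((read_reg(REG_CSTS) >> 2) & 0x3u) != 0x2u && polls < 10000) {
		polls++;
	}
	printf("shutdown %s after %d polls\n",
	       polls < 10000 ? "complete" : "timed out", polls);
	return 0;
}

The CC (0x14) and CSTS (0x1c) offsets and the SHN/SHST bit positions are taken from the NVMe specification; everything else in the sketch is illustrative only.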
00:27:16.944 [2024-11-19 01:11:23.545214] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545231] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545244] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545284] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545305] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545316] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545349] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545367] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545381] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545425] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545441] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545451] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545490] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545505] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545518] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545555] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545574] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545584] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545622] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545639] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545651] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545691] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545708] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545718] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545758] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.944 [2024-11-19 01:11:23.545766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:16.944 [2024-11-19 01:11:23.545773] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545787] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.944 [2024-11-19 01:11:23.545796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.944 [2024-11-19 01:11:23.545826] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.945 [2024-11-19 01:11:23.545833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:16.945 [2024-11-19 01:11:23.545841] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.545851] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.545862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.945 [2024-11-19 01:11:23.545888] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.945 [2024-11-19 01:11:23.545897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:16.945 [2024-11-19 01:11:23.545904] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.545917] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.545926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.945 [2024-11-19 01:11:23.545958] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.945 [2024-11-19 01:11:23.545964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:16.945 [2024-11-19 01:11:23.545974] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.545984] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.546000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.945 [2024-11-19 01:11:23.546020] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.945 [2024-11-19 01:11:23.546030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:16.945 [2024-11-19 01:11:23.546037] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.546049] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.546059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.945 [2024-11-19 01:11:23.546086] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.945 [2024-11-19 01:11:23.546093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:27:16.945 [2024-11-19 01:11:23.546103] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.546113] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.546124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.945 [2024-11-19 01:11:23.546145] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.945 [2024-11-19 01:11:23.546153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:27:16.945 [2024-11-19 01:11:23.546160] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.546171] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.546181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.945 [2024-11-19 01:11:23.546212] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.945 [2024-11-19 01:11:23.546219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:27:16.945 [2024-11-19 01:11:23.546227] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.546238] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.546249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.945 [2024-11-19 01:11:23.546282] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.945 [2024-11-19 01:11:23.546291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:27:16.945 [2024-11-19 01:11:23.550315] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.550335] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.550346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:16.945 [2024-11-19 01:11:23.550390] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:16.945 [2024-11-19 01:11:23.550398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0010 p:0 m:0 dnr:0 00:27:16.945 [2024-11-19 01:11:23.550407] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0xd4be1bcd 00:27:16.945 [2024-11-19 01:11:23.550415] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:27:17.208 128 00:27:17.208 Transport Service Identifier: 4420 00:27:17.208 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:17.208 Transport Address: 192.168.100.8 00:27:17.208 Transport Specific Address Subtype - RDMA 00:27:17.208 RDMA QP Service Type: 1 (Reliable Connected) 00:27:17.208 RDMA Provider Type: 1 (No provider specified) 00:27:17.208 RDMA CM Service: 1 (RDMA_CM) 00:27:17.208 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:17.208 [2024-11-19 01:11:23.694777] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:17.208 [2024-11-19 01:11:23.694842] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460199 ] 00:27:17.208 [2024-11-19 01:11:23.760554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:27:17.208 [2024-11-19 01:11:23.760658] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:27:17.208 [2024-11-19 01:11:23.760679] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:27:17.208 [2024-11-19 01:11:23.760687] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:27:17.208 [2024-11-19 01:11:23.760727] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:27:17.208 [2024-11-19 01:11:23.777662] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:27:17.208 [2024-11-19 01:11:23.793176] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:17.208 [2024-11-19 01:11:23.793196] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:27:17.208 [2024-11-19 01:11:23.793213] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793223] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793232] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793240] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793247] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793254] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793262] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793271] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793279] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793286] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793302] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793309] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793317] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 
01:11:23.793323] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793331] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793340] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793348] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793354] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793364] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793371] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793379] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793389] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793397] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793405] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793413] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793419] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793427] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793433] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793441] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793447] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793455] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793461] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:27:17.209 [2024-11-19 01:11:23.793472] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:17.209 [2024-11-19 01:11:23.793479] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:27:17.209 [2024-11-19 01:11:23.793506] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.793524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798311] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.209 [2024-11-19 01:11:23.798335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 
p:0 m:0 dnr:0 00:27:17.209 [2024-11-19 01:11:23.798347] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798357] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:17.209 [2024-11-19 01:11:23.798373] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:27:17.209 [2024-11-19 01:11:23.798382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:27:17.209 [2024-11-19 01:11:23.798401] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.209 [2024-11-19 01:11:23.798447] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.209 [2024-11-19 01:11:23.798457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:27:17.209 [2024-11-19 01:11:23.798469] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:27:17.209 [2024-11-19 01:11:23.798477] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:27:17.209 [2024-11-19 01:11:23.798498] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.209 [2024-11-19 01:11:23.798539] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.209 [2024-11-19 01:11:23.798548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:27:17.209 [2024-11-19 01:11:23.798556] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:27:17.209 [2024-11-19 01:11:23.798565] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:17.209 [2024-11-19 01:11:23.798587] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.209 [2024-11-19 01:11:23.798633] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.209 [2024-11-19 01:11:23.798640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:17.209 [2024-11-19 01:11:23.798649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:17.209 [2024-11-19 01:11:23.798657] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798671] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.209 [2024-11-19 01:11:23.798713] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.209 [2024-11-19 01:11:23.798722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:17.209 [2024-11-19 01:11:23.798733] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:17.209 [2024-11-19 01:11:23.798742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:17.209 [2024-11-19 01:11:23.798751] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:17.209 [2024-11-19 01:11:23.798869] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:27:17.209 [2024-11-19 01:11:23.798876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:17.209 [2024-11-19 01:11:23.798889] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.209 [2024-11-19 01:11:23.798934] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.209 [2024-11-19 01:11:23.798941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:17.209 [2024-11-19 01:11:23.798950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:17.209 [2024-11-19 01:11:23.798957] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798969] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.209 [2024-11-19 01:11:23.798984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.210 [2024-11-19 01:11:23.799012] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.210 [2024-11-19 01:11:23.799019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:17.210 [2024-11-19 01:11:23.799029] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:17.210 [2024-11-19 01:11:23.799037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799046] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799054] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:27:17.210 [2024-11-19 01:11:23.799068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799086] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799163] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.210 [2024-11-19 01:11:23.799172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:17.210 [2024-11-19 01:11:23.799185] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:27:17.210 [2024-11-19 01:11:23.799194] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:27:17.210 [2024-11-19 01:11:23.799203] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:27:17.210 [2024-11-19 01:11:23.799212] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 6 00:27:17.210 [2024-11-19 01:11:23.799220] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:27:17.210 [2024-11-19 01:11:23.799229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799236] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799259] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.210 [2024-11-19 01:11:23.799308] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.210 [2024-11-19 01:11:23.799318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:17.210 [2024-11-19 01:11:23.799328] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 
0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.210 [2024-11-19 01:11:23.799348] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.210 [2024-11-19 01:11:23.799370] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.210 [2024-11-19 01:11:23.799388] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.210 [2024-11-19 01:11:23.799404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799413] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799438] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.210 [2024-11-19 01:11:23.799484] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.210 [2024-11-19 01:11:23.799491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:27:17.210 [2024-11-19 01:11:23.799501] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:27:17.210 [2024-11-19 01:11:23.799508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799518] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799526] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799550] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL 
KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.210 [2024-11-19 01:11:23.799594] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.210 [2024-11-19 01:11:23.799603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:27:17.210 [2024-11-19 01:11:23.799675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799684] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799719] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799780] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.210 [2024-11-19 01:11:23.799787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:17.210 [2024-11-19 01:11:23.799811] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:27:17.210 [2024-11-19 01:11:23.799826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799836] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799860] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799935] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.210 [2024-11-19 01:11:23.799942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:17.210 [2024-11-19 01:11:23.799960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799968] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.799979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.799992] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.210 [2024-11-19 
01:11:23.800009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.800041] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.210 [2024-11-19 01:11:23.800050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:17.210 [2024-11-19 01:11:23.800067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.800077] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x95c2d153 00:27:17.210 [2024-11-19 01:11:23.800085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:27:17.210 [2024-11-19 01:11:23.800097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:27:17.211 [2024-11-19 01:11:23.800107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:17.211 [2024-11-19 01:11:23.800116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:17.211 [2024-11-19 01:11:23.800123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:27:17.211 [2024-11-19 01:11:23.800132] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:27:17.211 [2024-11-19 01:11:23.800139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:27:17.211 [2024-11-19 01:11:23.800148] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:27:17.211 [2024-11-19 01:11:23.800175] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.211 [2024-11-19 01:11:23.800201] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.211 [2024-11-19 01:11:23.800241] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.211 [2024-11-19 01:11:23.800252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:17.211 [2024-11-19 01:11:23.800259] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800268] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.211 [2024-11-19 01:11:23.800275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:17.211 [2024-11-19 01:11:23.800283] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800299] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.211 [2024-11-19 01:11:23.800340] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.211 [2024-11-19 01:11:23.800349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:17.211 [2024-11-19 01:11:23.800356] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800368] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.211 [2024-11-19 01:11:23.800408] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.211 [2024-11-19 01:11:23.800415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:17.211 [2024-11-19 01:11:23.800425] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800434] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.211 [2024-11-19 01:11:23.800472] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.211 [2024-11-19 01:11:23.800481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:27:17.211 [2024-11-19 01:11:23.800488] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800508] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800532] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800556] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 
0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c8000 len:0x200 key:0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800592] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c6000 len:0x1000 key:0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800615] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.211 [2024-11-19 01:11:23.800623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:17.211 [2024-11-19 01:11:23.800648] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800655] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.211 [2024-11-19 01:11:23.800663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:17.211 [2024-11-19 01:11:23.800674] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800683] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.211 [2024-11-19 01:11:23.800690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:17.211 [2024-11-19 01:11:23.800700] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x95c2d153 00:27:17.211 [2024-11-19 01:11:23.800706] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.211 [2024-11-19 01:11:23.800714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:17.211 [2024-11-19 01:11:23.800729] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x95c2d153 00:27:17.211 ===================================================== 00:27:17.211 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:17.211 ===================================================== 00:27:17.211 Controller Capabilities/Features 00:27:17.211 ================================ 00:27:17.211 Vendor ID: 8086 00:27:17.211 Subsystem Vendor ID: 8086 00:27:17.211 Serial Number: SPDK00000000000001 00:27:17.211 Model Number: SPDK bdev Controller 00:27:17.211 Firmware Version: 25.01 00:27:17.211 Recommended Arb Burst: 6 00:27:17.211 IEEE OUI Identifier: e4 d2 5c 00:27:17.211 Multi-path I/O 00:27:17.211 May have multiple subsystem ports: Yes 00:27:17.211 May have multiple controllers: Yes 00:27:17.211 Associated with SR-IOV VF: No 00:27:17.211 Max Data Transfer Size: 131072 00:27:17.211 Max Number of Namespaces: 32 00:27:17.211 Max Number of I/O Queues: 127 00:27:17.211 NVMe Specification Version (VS): 1.3 00:27:17.211 NVMe Specification Version (Identify): 1.3 00:27:17.211 Maximum Queue Entries: 128 00:27:17.211 Contiguous Queues Required: Yes 00:27:17.211 Arbitration Mechanisms Supported 00:27:17.211 Weighted Round Robin: Not Supported 
00:27:17.211 Vendor Specific: Not Supported 00:27:17.211 Reset Timeout: 15000 ms 00:27:17.211 Doorbell Stride: 4 bytes 00:27:17.211 NVM Subsystem Reset: Not Supported 00:27:17.211 Command Sets Supported 00:27:17.211 NVM Command Set: Supported 00:27:17.211 Boot Partition: Not Supported 00:27:17.211 Memory Page Size Minimum: 4096 bytes 00:27:17.211 Memory Page Size Maximum: 4096 bytes 00:27:17.211 Persistent Memory Region: Not Supported 00:27:17.211 Optional Asynchronous Events Supported 00:27:17.211 Namespace Attribute Notices: Supported 00:27:17.211 Firmware Activation Notices: Not Supported 00:27:17.211 ANA Change Notices: Not Supported 00:27:17.211 PLE Aggregate Log Change Notices: Not Supported 00:27:17.211 LBA Status Info Alert Notices: Not Supported 00:27:17.211 EGE Aggregate Log Change Notices: Not Supported 00:27:17.211 Normal NVM Subsystem Shutdown event: Not Supported 00:27:17.211 Zone Descriptor Change Notices: Not Supported 00:27:17.211 Discovery Log Change Notices: Not Supported 00:27:17.211 Controller Attributes 00:27:17.211 128-bit Host Identifier: Supported 00:27:17.211 Non-Operational Permissive Mode: Not Supported 00:27:17.211 NVM Sets: Not Supported 00:27:17.211 Read Recovery Levels: Not Supported 00:27:17.211 Endurance Groups: Not Supported 00:27:17.211 Predictable Latency Mode: Not Supported 00:27:17.211 Traffic Based Keep ALive: Not Supported 00:27:17.211 Namespace Granularity: Not Supported 00:27:17.211 SQ Associations: Not Supported 00:27:17.211 UUID List: Not Supported 00:27:17.212 Multi-Domain Subsystem: Not Supported 00:27:17.212 Fixed Capacity Management: Not Supported 00:27:17.212 Variable Capacity Management: Not Supported 00:27:17.212 Delete Endurance Group: Not Supported 00:27:17.212 Delete NVM Set: Not Supported 00:27:17.212 Extended LBA Formats Supported: Not Supported 00:27:17.212 Flexible Data Placement Supported: Not Supported 00:27:17.212 00:27:17.212 Controller Memory Buffer Support 00:27:17.212 ================================ 00:27:17.212 Supported: No 00:27:17.212 00:27:17.212 Persistent Memory Region Support 00:27:17.212 ================================ 00:27:17.212 Supported: No 00:27:17.212 00:27:17.212 Admin Command Set Attributes 00:27:17.212 ============================ 00:27:17.212 Security Send/Receive: Not Supported 00:27:17.212 Format NVM: Not Supported 00:27:17.212 Firmware Activate/Download: Not Supported 00:27:17.212 Namespace Management: Not Supported 00:27:17.212 Device Self-Test: Not Supported 00:27:17.212 Directives: Not Supported 00:27:17.212 NVMe-MI: Not Supported 00:27:17.212 Virtualization Management: Not Supported 00:27:17.212 Doorbell Buffer Config: Not Supported 00:27:17.212 Get LBA Status Capability: Not Supported 00:27:17.212 Command & Feature Lockdown Capability: Not Supported 00:27:17.212 Abort Command Limit: 4 00:27:17.212 Async Event Request Limit: 4 00:27:17.212 Number of Firmware Slots: N/A 00:27:17.212 Firmware Slot 1 Read-Only: N/A 00:27:17.212 Firmware Activation Without Reset: N/A 00:27:17.212 Multiple Update Detection Support: N/A 00:27:17.212 Firmware Update Granularity: No Information Provided 00:27:17.212 Per-Namespace SMART Log: No 00:27:17.212 Asymmetric Namespace Access Log Page: Not Supported 00:27:17.212 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:17.212 Command Effects Log Page: Supported 00:27:17.212 Get Log Page Extended Data: Supported 00:27:17.212 Telemetry Log Pages: Not Supported 00:27:17.212 Persistent Event Log Pages: Not Supported 00:27:17.212 Supported Log Pages Log Page: May Support 
00:27:17.212 Commands Supported & Effects Log Page: Not Supported 00:27:17.212 Feature Identifiers & Effects Log Page:May Support 00:27:17.212 NVMe-MI Commands & Effects Log Page: May Support 00:27:17.212 Data Area 4 for Telemetry Log: Not Supported 00:27:17.212 Error Log Page Entries Supported: 128 00:27:17.212 Keep Alive: Supported 00:27:17.212 Keep Alive Granularity: 10000 ms 00:27:17.212 00:27:17.212 NVM Command Set Attributes 00:27:17.212 ========================== 00:27:17.212 Submission Queue Entry Size 00:27:17.212 Max: 64 00:27:17.212 Min: 64 00:27:17.212 Completion Queue Entry Size 00:27:17.212 Max: 16 00:27:17.212 Min: 16 00:27:17.212 Number of Namespaces: 32 00:27:17.212 Compare Command: Supported 00:27:17.212 Write Uncorrectable Command: Not Supported 00:27:17.212 Dataset Management Command: Supported 00:27:17.212 Write Zeroes Command: Supported 00:27:17.212 Set Features Save Field: Not Supported 00:27:17.212 Reservations: Supported 00:27:17.212 Timestamp: Not Supported 00:27:17.212 Copy: Supported 00:27:17.212 Volatile Write Cache: Present 00:27:17.212 Atomic Write Unit (Normal): 1 00:27:17.212 Atomic Write Unit (PFail): 1 00:27:17.212 Atomic Compare & Write Unit: 1 00:27:17.212 Fused Compare & Write: Supported 00:27:17.212 Scatter-Gather List 00:27:17.212 SGL Command Set: Supported 00:27:17.212 SGL Keyed: Supported 00:27:17.212 SGL Bit Bucket Descriptor: Not Supported 00:27:17.212 SGL Metadata Pointer: Not Supported 00:27:17.212 Oversized SGL: Not Supported 00:27:17.212 SGL Metadata Address: Not Supported 00:27:17.212 SGL Offset: Supported 00:27:17.212 Transport SGL Data Block: Not Supported 00:27:17.212 Replay Protected Memory Block: Not Supported 00:27:17.212 00:27:17.212 Firmware Slot Information 00:27:17.212 ========================= 00:27:17.212 Active slot: 1 00:27:17.212 Slot 1 Firmware Revision: 25.01 00:27:17.212 00:27:17.212 00:27:17.212 Commands Supported and Effects 00:27:17.212 ============================== 00:27:17.212 Admin Commands 00:27:17.212 -------------- 00:27:17.212 Get Log Page (02h): Supported 00:27:17.212 Identify (06h): Supported 00:27:17.212 Abort (08h): Supported 00:27:17.212 Set Features (09h): Supported 00:27:17.212 Get Features (0Ah): Supported 00:27:17.212 Asynchronous Event Request (0Ch): Supported 00:27:17.212 Keep Alive (18h): Supported 00:27:17.212 I/O Commands 00:27:17.212 ------------ 00:27:17.212 Flush (00h): Supported LBA-Change 00:27:17.212 Write (01h): Supported LBA-Change 00:27:17.212 Read (02h): Supported 00:27:17.212 Compare (05h): Supported 00:27:17.212 Write Zeroes (08h): Supported LBA-Change 00:27:17.212 Dataset Management (09h): Supported LBA-Change 00:27:17.212 Copy (19h): Supported LBA-Change 00:27:17.212 00:27:17.212 Error Log 00:27:17.212 ========= 00:27:17.212 00:27:17.212 Arbitration 00:27:17.212 =========== 00:27:17.212 Arbitration Burst: 1 00:27:17.212 00:27:17.212 Power Management 00:27:17.212 ================ 00:27:17.212 Number of Power States: 1 00:27:17.212 Current Power State: Power State #0 00:27:17.212 Power State #0: 00:27:17.212 Max Power: 0.00 W 00:27:17.212 Non-Operational State: Operational 00:27:17.212 Entry Latency: Not Reported 00:27:17.212 Exit Latency: Not Reported 00:27:17.212 Relative Read Throughput: 0 00:27:17.212 Relative Read Latency: 0 00:27:17.212 Relative Write Throughput: 0 00:27:17.212 Relative Write Latency: 0 00:27:17.212 Idle Power: Not Reported 00:27:17.212 Active Power: Not Reported 00:27:17.212 Non-Operational Permissive Mode: Not Supported 00:27:17.212 00:27:17.212 Health 
Information 00:27:17.212 ================== 00:27:17.212 Critical Warnings: 00:27:17.212 Available Spare Space: OK 00:27:17.212 Temperature: OK 00:27:17.212 Device Reliability: OK 00:27:17.212 Read Only: No 00:27:17.212 Volatile Memory Backup: OK 00:27:17.212 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:17.212 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:17.212 Available Spare: 0% 00:27:17.212 Available Spare Threshold: 0% 00:27:17.212 Life Percentage [2024-11-19 01:11:23.800855] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x95c2d153 00:27:17.212 [2024-11-19 01:11:23.800867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.212 [2024-11-19 01:11:23.800903] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.212 [2024-11-19 01:11:23.800911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:17.212 [2024-11-19 01:11:23.800923] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x95c2d153 00:27:17.212 [2024-11-19 01:11:23.800964] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:27:17.212 [2024-11-19 01:11:23.800981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.212 [2024-11-19 01:11:23.800991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.212 [2024-11-19 01:11:23.801000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.212 [2024-11-19 01:11:23.801008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.212 [2024-11-19 01:11:23.801025] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x95c2d153 00:27:17.212 [2024-11-19 01:11:23.801035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.212 [2024-11-19 01:11:23.801064] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.212 [2024-11-19 01:11:23.801076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:27:17.212 [2024-11-19 01:11:23.801092] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.212 [2024-11-19 01:11:23.801104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.212 [2024-11-19 01:11:23.801114] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x95c2d153 00:27:17.212 [2024-11-19 01:11:23.801144] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801160] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:27:17.213 [2024-11-19 01:11:23.801168] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:27:17.213 [2024-11-19 01:11:23.801176] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801191] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801231] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801247] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801258] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801291] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801313] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801325] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801366] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801382] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801394] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801432] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801448] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801460] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 
0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801503] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801518] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801529] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801568] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801586] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801598] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801645] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801662] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801672] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801713] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801729] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801742] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801778] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:17.213 
[2024-11-19 01:11:23.801793] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801804] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801839] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801857] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801875] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801917] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801932] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801945] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.801956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.801980] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.213 [2024-11-19 01:11:23.801989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:17.213 [2024-11-19 01:11:23.801996] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.802008] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.213 [2024-11-19 01:11:23.802016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.213 [2024-11-19 01:11:23.802051] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.214 [2024-11-19 01:11:23.802058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:17.214 [2024-11-19 01:11:23.802068] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.802078] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.802089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.214 [2024-11-19 
01:11:23.802117] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.214 [2024-11-19 01:11:23.802127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:17.214 [2024-11-19 01:11:23.802134] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.802147] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.802156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.214 [2024-11-19 01:11:23.802188] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.214 [2024-11-19 01:11:23.802195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:17.214 [2024-11-19 01:11:23.802203] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.802214] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.802224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.214 [2024-11-19 01:11:23.802249] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.214 [2024-11-19 01:11:23.802257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:17.214 [2024-11-19 01:11:23.802264] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.802278] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.806301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.214 [2024-11-19 01:11:23.806332] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.214 [2024-11-19 01:11:23.806341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:17.214 [2024-11-19 01:11:23.806351] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.806366] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.806378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:17.214 [2024-11-19 01:11:23.806408] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:17.214 [2024-11-19 01:11:23.806416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:27:17.214 [2024-11-19 01:11:23.806424] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x95c2d153 00:27:17.214 [2024-11-19 01:11:23.806434] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:27:17.473 Used: 0% 00:27:17.473 Data Units Read: 0 00:27:17.473 Data Units Written: 0 00:27:17.473 Host Read Commands: 0 00:27:17.473 Host Write Commands: 0 00:27:17.473 Controller Busy Time: 0 minutes 00:27:17.473 Power Cycles: 0 00:27:17.473 Power On Hours: 0 hours 00:27:17.473 Unsafe Shutdowns: 0 00:27:17.473 Unrecoverable Media Errors: 0 00:27:17.473 Lifetime Error Log Entries: 0 00:27:17.473 Warning Temperature Time: 0 minutes 00:27:17.473 Critical Temperature Time: 0 minutes 00:27:17.473 00:27:17.473 Number of Queues 00:27:17.473 ================ 00:27:17.473 Number of I/O Submission Queues: 127 00:27:17.473 Number of I/O Completion Queues: 127 00:27:17.473 00:27:17.473 Active Namespaces 00:27:17.473 ================= 00:27:17.473 Namespace ID:1 00:27:17.473 Error Recovery Timeout: Unlimited 00:27:17.473 Command Set Identifier: NVM (00h) 00:27:17.473 Deallocate: Supported 00:27:17.473 Deallocated/Unwritten Error: Not Supported 00:27:17.473 Deallocated Read Value: Unknown 00:27:17.473 Deallocate in Write Zeroes: Not Supported 00:27:17.473 Deallocated Guard Field: 0xFFFF 00:27:17.473 Flush: Supported 00:27:17.473 Reservation: Supported 00:27:17.473 Namespace Sharing Capabilities: Multiple Controllers 00:27:17.473 Size (in LBAs): 131072 (0GiB) 00:27:17.473 Capacity (in LBAs): 131072 (0GiB) 00:27:17.473 Utilization (in LBAs): 131072 (0GiB) 00:27:17.473 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:17.473 EUI64: ABCDEF0123456789 00:27:17.473 UUID: e8dbadaa-f720-4954-b791-0cbafdcd0e82 00:27:17.473 Thin Provisioning: Not Supported 00:27:17.473 Per-NS Atomic Units: Yes 00:27:17.473 Atomic Boundary Size (Normal): 0 00:27:17.473 Atomic Boundary Size (PFail): 0 00:27:17.473 Atomic Boundary Offset: 0 00:27:17.473 Maximum Single Source Range Length: 65535 00:27:17.473 Maximum Copy Length: 65535 00:27:17.473 Maximum Source Range Count: 1 00:27:17.473 NGUID/EUI64 Never Reused: No 00:27:17.473 Namespace Write Protected: No 00:27:17.473 Number of LBA Formats: 1 00:27:17.473 Current LBA Format: LBA Format #00 00:27:17.473 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:17.473 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:17.473 01:11:23 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:17.473 rmmod nvme_rdma 00:27:17.473 rmmod nvme_fabrics 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 459950 ']' 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 459950 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 459950 ']' 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 459950 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:17.473 01:11:23 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 459950 00:27:17.473 01:11:24 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:17.473 01:11:24 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:17.473 01:11:24 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 459950' 00:27:17.473 killing process with pid 459950 00:27:17.473 01:11:24 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 459950 00:27:17.473 01:11:24 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 459950 00:27:18.851 01:11:25 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:18.851 01:11:25 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:18.851 00:27:18.851 real 0m9.154s 00:27:18.851 user 0m12.147s 00:27:18.851 sys 0m4.952s 00:27:18.851 01:11:25 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.851 01:11:25 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:18.852 ************************************ 00:27:18.852 END TEST nvmf_identify 00:27:18.852 ************************************ 00:27:18.852 01:11:25 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:27:18.852 01:11:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:18.852 01:11:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.852 01:11:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.852 ************************************ 00:27:18.852 START TEST nvmf_perf 00:27:18.852 ************************************ 00:27:18.852 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:27:19.111 * Looking for test storage... 
00:27:19.111 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:19.111 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:19.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.112 --rc genhtml_branch_coverage=1 00:27:19.112 --rc genhtml_function_coverage=1 00:27:19.112 --rc genhtml_legend=1 00:27:19.112 --rc geninfo_all_blocks=1 00:27:19.112 --rc geninfo_unexecuted_blocks=1 00:27:19.112 00:27:19.112 ' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:19.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.112 --rc genhtml_branch_coverage=1 00:27:19.112 --rc genhtml_function_coverage=1 00:27:19.112 --rc genhtml_legend=1 00:27:19.112 --rc geninfo_all_blocks=1 00:27:19.112 --rc geninfo_unexecuted_blocks=1 00:27:19.112 00:27:19.112 ' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:19.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.112 --rc genhtml_branch_coverage=1 00:27:19.112 --rc genhtml_function_coverage=1 00:27:19.112 --rc genhtml_legend=1 00:27:19.112 --rc geninfo_all_blocks=1 00:27:19.112 --rc geninfo_unexecuted_blocks=1 00:27:19.112 00:27:19.112 ' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:19.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.112 --rc genhtml_branch_coverage=1 00:27:19.112 --rc genhtml_function_coverage=1 00:27:19.112 --rc genhtml_legend=1 00:27:19.112 --rc geninfo_all_blocks=1 00:27:19.112 --rc geninfo_unexecuted_blocks=1 00:27:19.112 00:27:19.112 ' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.112 01:11:25 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:19.112 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.112 
01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:19.112 01:11:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.713 01:11:31 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:25.713 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:25.713 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@405 -- # modinfo irdma 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.713 01:11:31 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:25.713 Found net devices under 0000:af:00.0: cvl_0_0 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:25.713 Found net devices under 0000:af:00.1: cvl_0_1 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:25.713 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # 
rxe_cfg rxe-net 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo cvl_0_0 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo cvl_0_1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:27:25.714 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:25.714 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:27:25.714 altname enp175s0f0np0 00:27:25.714 altname ens801f0np0 00:27:25.714 inet 192.168.100.8/24 scope global cvl_0_0 00:27:25.714 valid_lft forever preferred_lft forever 00:27:25.714 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:27:25.714 valid_lft forever preferred_lft forever 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:27:25.714 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:25.714 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:27:25.714 altname enp175s0f1np1 00:27:25.714 altname ens801f1np1 00:27:25.714 inet 192.168.100.9/24 scope global cvl_0_1 00:27:25.714 valid_lft forever preferred_lft forever 00:27:25.714 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:27:25.714 valid_lft forever preferred_lft forever 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo cvl_0_0 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo cvl_0_1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:27:25.714 
01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:25.714 192.168.100.9' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:25.714 192.168.100.9' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:25.714 192.168.100.9' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=463468 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 463468 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 463468 ']' 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.714 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
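The nvmfappstart/waitforlisten pair traced here amounts to launching the target with the core and tracepoint masks shown and then polling its RPC socket until it answers. A condensed sketch of that sequence, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten in autotest_common.sh additionally tracks the target pid and gives up after a retry limit; this only captures the gist):

rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the RPC socket until the target is ready to accept configuration calls
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

The target launch and startup notices for this run follow below.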
00:27:25.715 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.715 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:25.715 01:11:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:25.715 [2024-11-19 01:11:31.553601] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:25.715 [2024-11-19 01:11:31.553696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.715 [2024-11-19 01:11:31.681768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:25.715 [2024-11-19 01:11:31.791780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.715 [2024-11-19 01:11:31.791825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.715 [2024-11-19 01:11:31.791836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.715 [2024-11-19 01:11:31.791863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.715 [2024-11-19 01:11:31.791872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.715 [2024-11-19 01:11:31.794152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.715 [2024-11-19 01:11:31.794180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.715 [2024-11-19 01:11:31.794283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.715 [2024-11-19 01:11:31.794322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.715 01:11:32 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.715 01:11:32 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:27:25.715 01:11:32 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.715 01:11:32 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:25.715 01:11:32 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:25.715 01:11:32 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.715 01:11:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:25.715 01:11:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:29.001 01:11:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:29.001 01:11:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:29.001 01:11:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:27:29.001 01:11:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:29.259 01:11:35 
nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:29.259 01:11:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:27:29.259 01:11:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:29.260 01:11:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:27:29.260 01:11:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:27:29.518 [2024-11-19 01:11:36.126789] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:27:29.518 [2024-11-19 01:11:36.143996] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x61200002a040/0x617000007fc0) succeed. 00:27:29.518 [2024-11-19 01:11:36.153780] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x61200002a1c0/0x617000008340) succeed. 00:27:29.518 [2024-11-19 01:11:36.153811] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:27:29.518 01:11:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:29.776 01:11:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:29.776 01:11:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.035 01:11:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:30.035 01:11:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:30.294 01:11:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:30.294 [2024-11-19 01:11:36.958461] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:30.552 01:11:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:30.552 01:11:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:27:30.552 01:11:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:27:30.552 01:11:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:30.552 01:11:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:27:31.930 Initializing NVMe Controllers 00:27:31.930 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:27:31.930 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:27:31.930 Initialization complete. Launching workers. 
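Condensing the perf.sh steps traced above, the RDMA target configuration for this run is only a handful of RPC calls; every command below is copied from the trace (the baseline spdk_nvme_perf run against the local PCIe controller at 0000:5e:00.0 comes right after it, and its latency table is printed next):

rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
$rpc bdev_malloc_create 64 512                                     # -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1      # local NVMe at 0000:5e:00.0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420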
00:27:31.930 ======================================================== 00:27:31.930 Latency(us) 00:27:31.930 Device Information : IOPS MiB/s Average min max 00:27:31.930 PCIE (0000:5e:00.0) NSID 1 from core 0: 90616.83 353.97 352.52 31.18 4430.16 00:27:31.930 ======================================================== 00:27:31.930 Total : 90616.83 353.97 352.52 31.18 4430.16 00:27:31.930 00:27:32.188 01:11:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:35.476 Initializing NVMe Controllers 00:27:35.476 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:35.476 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:35.476 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:35.476 Initialization complete. Launching workers. 00:27:35.476 ======================================================== 00:27:35.476 Latency(us) 00:27:35.476 Device Information : IOPS MiB/s Average min max 00:27:35.476 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5624.05 21.97 176.14 61.20 4117.29 00:27:35.476 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4470.14 17.46 222.57 88.45 4138.71 00:27:35.476 ======================================================== 00:27:35.476 Total : 10094.20 39.43 196.70 61.20 4138.71 00:27:35.476 00:27:35.476 01:11:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:39.666 Initializing NVMe Controllers 00:27:39.666 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:39.666 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:39.666 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:39.666 Initialization complete. Launching workers. 
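The -q 1 fabric run reported above is a useful sanity check: with one command outstanding per namespace, IOPS times average latency should come out to roughly 1. A quick check against the numbers from that table (values copied verbatim; units are IOPS and microseconds):

echo "5624.05 * 176.14 / 1000000" | bc -l   # NSID 1: ~0.99 commands in flight
echo "4470.14 * 222.57 / 1000000" | bc -l   # NSID 2: ~0.99 commands in flight

Both land just under one outstanding command, consistent with queue depth 1 per namespace.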
00:27:39.666 ======================================================== 00:27:39.666 Latency(us) 00:27:39.666 Device Information : IOPS MiB/s Average min max 00:27:39.666 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15917.07 62.18 2003.37 502.86 5728.88 00:27:39.666 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4028.48 15.74 7969.23 7227.16 8269.74 00:27:39.666 ======================================================== 00:27:39.666 Total : 19945.55 77.91 3208.32 502.86 8269.74 00:27:39.666 00:27:39.666 01:11:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:39.666 01:11:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ rdma == \r\d\m\a ]] 00:27:39.666 01:11:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:27:39.666 No valid NVMe controllers or AIO or URING devices found 00:27:39.666 Initializing NVMe Controllers 00:27:39.666 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:39.666 Controller IO queue size 128, less than required. 00:27:39.666 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.666 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:39.666 Controller IO queue size 128, less than required. 00:27:39.666 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.666 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:39.666 WARNING: Some requested NVMe devices were skipped 00:27:39.666 01:11:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:27:44.936 Initializing NVMe Controllers 00:27:44.936 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:44.936 Controller IO queue size 128, less than required. 00:27:44.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:44.936 Controller IO queue size 128, less than required. 00:27:44.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:44.936 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:44.936 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:44.936 Initialization complete. Launching workers. 
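The --transport-stat run whose output follows prints per-queue RDMA poll-group counters (polls, idle_polls, completions, send/recv WRs). A rough post-processing sketch, not part of the test itself, for turning those lines into a busy-poll ratio and completions per busy poll, assuming the perf output was captured to a file (transport_stats.log is a placeholder name):

awk '/polls:/ && !/idle/ {p=$2}
     /idle_polls:/       {i=$2}
     /completions:/      {c=$2;
         printf "busy polls: %d (%.1f%%), completions per busy poll: %.1f\n",
                p-i, 100*(p-i)/p, c/(p-i)}' transport_stats.log

Applied to the statistics below, both queues spend over 97% of their polls idle and retire roughly 6-8 completions on each busy poll.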
00:27:44.936 00:27:44.936 ==================== 00:27:44.936 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:44.936 RDMA transport: 00:27:44.936 dev name: rocep175s0f0 00:27:44.936 polls: 254684 00:27:44.936 idle_polls: 250034 00:27:44.936 completions: 35210 00:27:44.936 queued_requests: 1 00:27:44.936 total_send_wrs: 17605 00:27:44.936 send_doorbell_updates: 4185 00:27:44.936 total_recv_wrs: 17732 00:27:44.936 recv_doorbell_updates: 4187 00:27:44.936 --------------------------------- 00:27:44.936 00:27:44.936 ==================== 00:27:44.936 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:44.936 RDMA transport: 00:27:44.936 dev name: rocep175s0f0 00:27:44.936 polls: 256198 00:27:44.936 idle_polls: 249599 00:27:44.936 completions: 42718 00:27:44.936 queued_requests: 1 00:27:44.936 total_send_wrs: 21359 00:27:44.936 send_doorbell_updates: 5738 00:27:44.936 total_recv_wrs: 21486 00:27:44.936 recv_doorbell_updates: 5739 00:27:44.936 --------------------------------- 00:27:44.936 ======================================================== 00:27:44.936 Latency(us) 00:27:44.936 Device Information : IOPS MiB/s Average min max 00:27:44.936 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4401.00 1100.25 29488.21 17971.95 250465.58 00:27:44.936 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5339.50 1334.88 24593.44 15266.71 434990.07 00:27:44.936 ======================================================== 00:27:44.936 Total : 9740.50 2435.12 26805.02 15266.71 434990.07 00:27:44.936 00:27:44.936 01:11:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:44.936 01:11:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:44.936 01:11:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:44.936 01:11:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:27:44.936 01:11:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=b78c1ac7-8e9f-44c8-83e3-94572ea73eae 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb b78c1ac7-8e9f-44c8-83e3-94572ea73eae 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=b78c1ac7-8e9f-44c8-83e3-94572ea73eae 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:27:48.223 { 00:27:48.223 "uuid": "b78c1ac7-8e9f-44c8-83e3-94572ea73eae", 00:27:48.223 "name": "lvs_0", 00:27:48.223 "base_bdev": "Nvme0n1", 00:27:48.223 "total_data_clusters": 238234, 00:27:48.223 "free_clusters": 238234, 00:27:48.223 "block_size": 512, 00:27:48.223 
"cluster_size": 4194304 00:27:48.223 } 00:27:48.223 ]' 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b78c1ac7-8e9f-44c8-83e3-94572ea73eae") .free_clusters' 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="b78c1ac7-8e9f-44c8-83e3-94572ea73eae") .cluster_size' 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:27:48.223 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:27:48.223 952936 00:27:48.224 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:27:48.224 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:48.224 01:11:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b78c1ac7-8e9f-44c8-83e3-94572ea73eae lbd_0 20480 00:27:48.790 01:11:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=60280e89-6760-48c1-872c-1455859344ce 00:27:48.790 01:11:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 60280e89-6760-48c1-872c-1455859344ce lvs_n_0 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=00b2c995-a726-4636-9b3c-242e4ea03e3d 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 00b2c995-a726-4636-9b3c-242e4ea03e3d 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=00b2c995-a726-4636-9b3c-242e4ea03e3d 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:27:49.726 { 00:27:49.726 "uuid": "b78c1ac7-8e9f-44c8-83e3-94572ea73eae", 00:27:49.726 "name": "lvs_0", 00:27:49.726 "base_bdev": "Nvme0n1", 00:27:49.726 "total_data_clusters": 238234, 00:27:49.726 "free_clusters": 233114, 00:27:49.726 "block_size": 512, 00:27:49.726 "cluster_size": 4194304 00:27:49.726 }, 00:27:49.726 { 00:27:49.726 "uuid": "00b2c995-a726-4636-9b3c-242e4ea03e3d", 00:27:49.726 "name": "lvs_n_0", 00:27:49.726 "base_bdev": "60280e89-6760-48c1-872c-1455859344ce", 00:27:49.726 "total_data_clusters": 5114, 00:27:49.726 "free_clusters": 5114, 00:27:49.726 "block_size": 512, 00:27:49.726 "cluster_size": 4194304 00:27:49.726 } 00:27:49.726 ]' 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="00b2c995-a726-4636-9b3c-242e4ea03e3d") .free_clusters' 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="00b2c995-a726-4636-9b3c-242e4ea03e3d") .cluster_size' 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:27:49.726 20456 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:49.726 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 00b2c995-a726-4636-9b3c-242e4ea03e3d lbd_nest_0 20456 00:27:49.985 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=6f137a70-941b-4646-abed-58899cd2519d 00:27:49.985 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:50.244 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:50.244 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6f137a70-941b-4646-abed-58899cd2519d 00:27:50.503 01:11:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:50.503 01:11:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:50.503 01:11:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:50.503 01:11:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:50.503 01:11:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:50.503 01:11:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:02.709 Initializing NVMe Controllers 00:28:02.709 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.709 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:02.709 Initialization complete. Launching workers. 
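The two free_mb values derived above (952936 for lvs_0 and 20456 for lvs_n_0) are simply free_clusters times cluster_size converted to MiB, after which perf.sh caps the first at 20480 for the lbd_0 volume. A minimal reproduction of that arithmetic, using the values reported by bdev_lvol_get_lvstores:

cs=4194304                               # cluster_size from bdev_lvol_get_lvstores
echo $(( 238234 * cs / 1024 / 1024 ))    # lvs_0:   952936 MiB -> capped to 20480 for lbd_0
echo $(( 5114   * cs / 1024 / 1024 ))    # lvs_n_0: 20456 MiB  -> used as-is for lbd_nest_0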
00:28:02.709 ======================================================== 00:28:02.709 Latency(us) 00:28:02.709 Device Information : IOPS MiB/s Average min max 00:28:02.709 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4809.70 2.35 207.39 85.91 7066.37 00:28:02.709 ======================================================== 00:28:02.709 Total : 4809.70 2.35 207.39 85.91 7066.37 00:28:02.709 00:28:02.709 01:12:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:02.709 01:12:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:14.917 Initializing NVMe Controllers 00:28:14.917 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:14.917 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:14.917 Initialization complete. Launching workers. 00:28:14.917 ======================================================== 00:28:14.917 Latency(us) 00:28:14.917 Device Information : IOPS MiB/s Average min max 00:28:14.917 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 124.79 15.60 8018.27 4986.06 15963.71 00:28:14.917 ======================================================== 00:28:14.917 Total : 124.79 15.60 8018.27 4986.06 15963.71 00:28:14.917 00:28:14.917 01:12:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:14.917 01:12:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:14.917 01:12:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:24.895 Initializing NVMe Controllers 00:28:24.895 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.895 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:24.895 Initialization complete. Launching workers. 00:28:24.895 ======================================================== 00:28:24.895 Latency(us) 00:28:24.895 Device Information : IOPS MiB/s Average min max 00:28:24.895 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10176.50 4.97 3143.65 942.71 9006.99 00:28:24.895 ======================================================== 00:28:24.895 Total : 10176.50 4.97 3143.65 942.71 9006.99 00:28:24.895 00:28:25.154 01:12:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:25.154 01:12:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:37.372 Initializing NVMe Controllers 00:28:37.372 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.372 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:37.372 Initialization complete. Launching workers. 
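Each of the six latency tables in this block comes from the same nested sweep over the qd_depth and io_size arrays declared earlier. Condensed from the trace (the randrw 50/50 mix and the 10-second duration stay fixed across the sweep):

perf=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf
qd_depth=(1 32 128); io_size=(512 131072)
for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
        $perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
    done
done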
00:28:37.372 ======================================================== 00:28:37.372 Latency(us) 00:28:37.372 Device Information : IOPS MiB/s Average min max 00:28:37.372 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8439.80 1054.97 3790.32 663.36 24333.96 00:28:37.372 ======================================================== 00:28:37.372 Total : 8439.80 1054.97 3790.32 663.36 24333.96 00:28:37.372 00:28:37.372 01:12:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:37.372 01:12:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:37.372 01:12:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:49.588 Initializing NVMe Controllers 00:28:49.588 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.588 Controller IO queue size 128, less than required. 00:28:49.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:49.588 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:49.588 Initialization complete. Launching workers. 00:28:49.588 ======================================================== 00:28:49.588 Latency(us) 00:28:49.588 Device Information : IOPS MiB/s Average min max 00:28:49.588 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16775.11 8.19 7632.77 2409.28 15823.59 00:28:49.588 ======================================================== 00:28:49.588 Total : 16775.11 8.19 7632.77 2409.28 15823.59 00:28:49.588 00:28:49.588 01:12:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:49.588 01:12:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:59.565 Initializing NVMe Controllers 00:28:59.565 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:59.565 Controller IO queue size 128, less than required. 00:28:59.565 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.565 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:59.565 Initialization complete. Launching workers. 
00:28:59.565 ======================================================== 00:28:59.565 Latency(us) 00:28:59.565 Device Information : IOPS MiB/s Average min max 00:28:59.565 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5351.50 668.94 23924.72 7950.19 107604.38 00:28:59.565 ======================================================== 00:28:59.565 Total : 5351.50 668.94 23924.72 7950.19 107604.38 00:28:59.565 00:28:59.824 01:13:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:00.082 01:13:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6f137a70-941b-4646-abed-58899cd2519d 00:29:00.649 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:00.908 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 60280e89-6760-48c1-872c-1455859344ce 00:29:01.166 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:01.426 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:01.426 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:01.426 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.426 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:29:01.426 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:01.426 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:01.426 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:29:01.426 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.426 01:13:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:01.426 rmmod nvme_rdma 00:29:01.426 rmmod nvme_fabrics 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 463468 ']' 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 463468 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 463468 ']' 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 463468 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463468 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:01.426 
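The teardown traced above mirrors the setup in reverse: drop the subsystem, delete the nested and top-level logical volumes and their stores, then let nvmftestfini unload the RDMA NVMe modules. Condensed from the trace (the lvol UUIDs are the ones created earlier in this run):

rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
$rpc bdev_lvol_delete 6f137a70-941b-4646-abed-58899cd2519d      # lbd_nest_0
$rpc bdev_lvol_delete_lvstore -l lvs_n_0
$rpc bdev_lvol_delete 60280e89-6760-48c1-872c-1455859344ce      # lbd_0
$rpc bdev_lvol_delete_lvstore -l lvs_0
modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics         # done by nvmftestfini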
01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463468' 00:29:01.426 killing process with pid 463468 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 463468 00:29:01.426 01:13:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 463468 00:29:03.960 01:13:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:03.960 01:13:10 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:03.960 00:29:03.960 real 1m45.085s 00:29:03.960 user 6m37.621s 00:29:03.960 sys 0m7.125s 00:29:03.960 01:13:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:03.960 01:13:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:03.960 ************************************ 00:29:03.960 END TEST nvmf_perf 00:29:03.960 ************************************ 00:29:03.960 01:13:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:29:03.960 01:13:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:03.960 01:13:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.960 01:13:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.960 ************************************ 00:29:03.960 START TEST nvmf_fio_host 00:29:03.960 ************************************ 00:29:03.960 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:29:04.221 * Looking for test storage... 
00:29:04.221 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:04.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.221 --rc genhtml_branch_coverage=1 00:29:04.221 --rc genhtml_function_coverage=1 00:29:04.221 --rc genhtml_legend=1 00:29:04.221 --rc geninfo_all_blocks=1 00:29:04.221 --rc geninfo_unexecuted_blocks=1 00:29:04.221 00:29:04.221 ' 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:04.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.221 --rc genhtml_branch_coverage=1 00:29:04.221 --rc genhtml_function_coverage=1 00:29:04.221 --rc genhtml_legend=1 00:29:04.221 --rc geninfo_all_blocks=1 00:29:04.221 --rc geninfo_unexecuted_blocks=1 00:29:04.221 00:29:04.221 ' 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:04.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.221 --rc genhtml_branch_coverage=1 00:29:04.221 --rc genhtml_function_coverage=1 00:29:04.221 --rc genhtml_legend=1 00:29:04.221 --rc geninfo_all_blocks=1 00:29:04.221 --rc geninfo_unexecuted_blocks=1 00:29:04.221 00:29:04.221 ' 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:04.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.221 --rc genhtml_branch_coverage=1 00:29:04.221 --rc genhtml_function_coverage=1 00:29:04.221 --rc genhtml_legend=1 00:29:04.221 --rc geninfo_all_blocks=1 00:29:04.221 --rc geninfo_unexecuted_blocks=1 00:29:04.221 00:29:04.221 ' 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.221 01:13:10 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.221 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- 
# NVMF_SECOND_PORT=4421 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:04.222 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
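The "[: : integer expression expected" message just above comes from nvmf/common.sh line 33, where build_nvmf_app_args evaluates '[' '' -eq 1 ']': the flag being tested is unset, so test(1) receives an empty string where it expects an integer and the branch is simply skipped. A minimal defensive sketch of that kind of guard, assuming nothing about the real variable (SOME_FLAG and the appended argument are placeholders, not what common.sh actually uses):

    # Hedged sketch, not the actual common.sh code: default the flag to 0 so an
    # unset or empty variable never reaches the integer comparison.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=("$SOME_EXTRA_ARG")   # placeholder for whatever the real branch appends
    fi

Written this way the check is still a single '[' ... -eq 1 ']' test, but it can no longer emit the "integer expression expected" warning when the environment leaves the flag empty.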
00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.222 01:13:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.791 01:13:16 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:10.791 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:10.791 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@405 -- # modinfo irdma 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:10.791 Found net devices under 0000:af:00.0: cvl_0_0 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:10.791 Found net devices under 0000:af:00.1: cvl_0_1 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:29:10.791 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:10.792 01:13:16 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo cvl_0_0 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo cvl_0_1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:10.792 01:13:16 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:29:10.792 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:10.792 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:29:10.792 altname enp175s0f0np0 00:29:10.792 altname ens801f0np0 00:29:10.792 inet 192.168.100.8/24 scope global cvl_0_0 00:29:10.792 valid_lft forever preferred_lft forever 00:29:10.792 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:29:10.792 valid_lft forever preferred_lft forever 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:29:10.792 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:10.792 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:29:10.792 altname enp175s0f1np1 00:29:10.792 altname ens801f1np1 00:29:10.792 inet 192.168.100.9/24 scope global cvl_0_1 00:29:10.792 valid_lft forever preferred_lft forever 00:29:10.792 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:29:10.792 valid_lft forever preferred_lft forever 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == 
\c\v\l\_\0\_\0 ]] 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo cvl_0_0 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo cvl_0_1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:10.792 192.168.100.9' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:10.792 192.168.100.9' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:10.792 192.168.100.9' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:10.792 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:10.792 01:13:16 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=482876 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 482876 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 482876 ']' 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.793 01:13:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.793 [2024-11-19 01:13:16.750791] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:10.793 [2024-11-19 01:13:16.750883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.793 [2024-11-19 01:13:16.878897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.793 [2024-11-19 01:13:16.988513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.793 [2024-11-19 01:13:16.988560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.793 [2024-11-19 01:13:16.988571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.793 [2024-11-19 01:13:16.988580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.793 [2024-11-19 01:13:16.988587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
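At this point fio.sh has resolved the RDMA target addresses (192.168.100.8 and 192.168.100.9 above), loaded nvme-rdma, and launched the target: the pid is captured into nvmfpid, a cleanup trap is installed, and waitforlisten blocks until the RPC socket answers. A hedged sketch of that launch-and-wait pattern, with a simplified polling loop standing in for the real waitforlisten helper from autotest_common.sh:

    # Start the NVMe-oF target with the same arguments as the trace above:
    # shared-memory id 0, tracepoint mask 0xFFFF, reactors on cores 0-3 (-m 0xF).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
    # Simplified stand-in for waitforlisten: poll until the target's RPC socket
    # accepts requests, then continue with the rpc.py configuration calls.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket answers, the subsequent rpc.py calls in the log (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) build up the subsystem that the fio runs below exercise.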
00:29:10.793 [2024-11-19 01:13:16.990981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.793 [2024-11-19 01:13:16.991083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.793 [2024-11-19 01:13:16.991163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.793 [2024-11-19 01:13:16.991185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.052 01:13:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.052 01:13:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:29:11.052 01:13:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:11.311 [2024-11-19 01:13:17.764207] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000292c0/0x617000007c40) succeed. 00:29:11.311 [2024-11-19 01:13:17.773750] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029440/0x617000007fc0) succeed. 00:29:11.311 [2024-11-19 01:13:17.773779] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:29:11.311 01:13:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:11.311 01:13:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.311 01:13:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.311 01:13:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:11.570 Malloc1 00:29:11.570 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:11.829 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:11.829 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:12.088 [2024-11-19 01:13:18.680284] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:12.088 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 
traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:12.347 01:13:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:12.607 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:12.607 fio-3.35 00:29:12.607 Starting 1 thread 00:29:15.143 00:29:15.143 test: (groupid=0, jobs=1): err= 0: pid=483468: Tue Nov 19 01:13:21 2024 00:29:15.143 read: IOPS=14.9k, BW=58.3MiB/s (61.1MB/s)(117MiB/2004msec) 00:29:15.143 slat (nsec): min=1508, max=29058, avg=1650.21, stdev=356.10 00:29:15.143 clat (usec): min=2129, max=8086, avg=4251.81, stdev=154.66 00:29:15.143 lat (usec): min=2142, max=8087, avg=4253.46, stdev=154.60 00:29:15.143 clat percentiles (usec): 00:29:15.143 | 1.00th=[ 4178], 5.00th=[ 4228], 10.00th=[ 4228], 20.00th=[ 4228], 00:29:15.143 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4228], 00:29:15.143 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4293], 00:29:15.143 | 99.00th=[ 4752], 99.50th=[ 5407], 99.90th=[ 6063], 99.95th=[ 7046], 00:29:15.143 | 99.99th=[ 7635] 00:29:15.143 bw ( KiB/s): min=57384, max=60696, per=99.97%, avg=59654.00, stdev=1556.64, samples=4 00:29:15.143 iops : min=14346, max=15174, avg=14913.50, stdev=389.16, samples=4 00:29:15.143 write: IOPS=14.9k, BW=58.3MiB/s (61.1MB/s)(117MiB/2004msec); 0 zone resets 00:29:15.143 slat (nsec): min=1552, max=31310, avg=1742.07, stdev=358.79 00:29:15.143 clat (usec): min=2136, max=8093, avg=4249.54, stdev=150.37 00:29:15.143 lat 
(usec): min=2149, max=8094, avg=4251.29, stdev=150.32 00:29:15.143 clat percentiles (usec): 00:29:15.143 | 1.00th=[ 4178], 5.00th=[ 4228], 10.00th=[ 4228], 20.00th=[ 4228], 00:29:15.143 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4228], 00:29:15.143 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4293], 00:29:15.143 | 99.00th=[ 4752], 99.50th=[ 5407], 99.90th=[ 5735], 99.95th=[ 6587], 00:29:15.143 | 99.99th=[ 7635] 00:29:15.143 bw ( KiB/s): min=57704, max=60664, per=100.00%, avg=59674.00, stdev=1355.45, samples=4 00:29:15.143 iops : min=14426, max=15166, avg=14918.50, stdev=338.86, samples=4 00:29:15.143 lat (msec) : 4=0.69%, 10=99.31% 00:29:15.143 cpu : usr=99.30%, sys=0.35%, ctx=18, majf=0, minf=1594 00:29:15.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:15.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:15.143 issued rwts: total=29897,29894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:15.143 00:29:15.143 Run status group 0 (all jobs): 00:29:15.143 READ: bw=58.3MiB/s (61.1MB/s), 58.3MiB/s-58.3MiB/s (61.1MB/s-61.1MB/s), io=117MiB (122MB), run=2004-2004msec 00:29:15.143 WRITE: bw=58.3MiB/s (61.1MB/s), 58.3MiB/s-58.3MiB/s (61.1MB/s-61.1MB/s), io=117MiB (122MB), run=2004-2004msec 00:29:15.711 ----------------------------------------------------- 00:29:15.711 Suppressions used: 00:29:15.711 count bytes template 00:29:15.711 1 63 /usr/src/fio/parse.c 00:29:15.711 1 8 libtcmalloc_minimal.so 00:29:15.711 ----------------------------------------------------- 00:29:15.711 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:15.711 01:13:22 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:15.711 01:13:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:15.971 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:15.971 fio-3.35 00:29:15.971 Starting 1 thread 00:29:18.507 00:29:18.507 test: (groupid=0, jobs=1): err= 0: pid=484029: Tue Nov 19 01:13:25 2024 00:29:18.507 read: IOPS=12.1k, BW=190MiB/s (199MB/s)(374MiB/1974msec) 00:29:18.507 slat (nsec): min=2533, max=42999, avg=3016.75, stdev=1122.52 00:29:18.507 clat (usec): min=718, max=8847, avg=2134.90, stdev=1393.09 00:29:18.507 lat (usec): min=720, max=8849, avg=2137.91, stdev=1393.48 00:29:18.507 clat percentiles (usec): 00:29:18.507 | 1.00th=[ 938], 5.00th=[ 1106], 10.00th=[ 1188], 20.00th=[ 1319], 00:29:18.507 | 30.00th=[ 1434], 40.00th=[ 1549], 50.00th=[ 1663], 60.00th=[ 1811], 00:29:18.507 | 70.00th=[ 2073], 80.00th=[ 2376], 90.00th=[ 4015], 95.00th=[ 5800], 00:29:18.507 | 99.00th=[ 7504], 99.50th=[ 8094], 99.90th=[ 8586], 99.95th=[ 8717], 00:29:18.507 | 99.99th=[ 8848] 00:29:18.507 bw ( KiB/s): min=92896, max=96512, per=48.99%, avg=95160.00, stdev=1567.54, samples=4 00:29:18.507 iops : min= 5806, max= 6032, avg=5947.50, stdev=97.97, samples=4 00:29:18.507 write: IOPS=6832, BW=107MiB/s (112MB/s)(193MiB/1806msec); 0 zone resets 00:29:18.507 slat (usec): min=27, max=146, avg=30.36, stdev= 4.51 00:29:18.507 clat (usec): min=5425, max=22254, avg=14425.15, stdev=2247.26 00:29:18.507 lat (usec): min=5459, max=22282, avg=14455.50, stdev=2246.74 00:29:18.507 clat percentiles (usec): 00:29:18.507 | 1.00th=[ 7373], 5.00th=[11600], 10.00th=[12256], 20.00th=[12911], 00:29:18.507 | 30.00th=[13304], 40.00th=[13566], 50.00th=[14091], 60.00th=[14615], 00:29:18.507 | 70.00th=[15401], 80.00th=[16319], 90.00th=[17433], 95.00th=[18220], 00:29:18.507 | 99.00th=[20579], 99.50th=[21103], 99.90th=[21890], 99.95th=[21890], 00:29:18.507 | 99.99th=[22152] 00:29:18.507 bw ( KiB/s): min=95744, max=100352, per=89.46%, avg=97800.00, stdev=2073.24, samples=4 00:29:18.507 iops : min= 5984, max= 6272, avg=6112.50, stdev=129.58, samples=4 00:29:18.507 lat (usec) : 750=0.01%, 1000=1.27% 00:29:18.507 lat (msec) : 2=43.51%, 4=14.62%, 10=7.40%, 20=32.66%, 50=0.54% 00:29:18.507 cpu : usr=95.66%, sys=3.74%, ctx=95, majf=0, minf=15360 00:29:18.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:29:18.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:18.507 issued rwts: total=23964,12340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.507 latency : target=0, window=0, percentile=100.00%, depth=128 
00:29:18.507 00:29:18.507 Run status group 0 (all jobs): 00:29:18.507 READ: bw=190MiB/s (199MB/s), 190MiB/s-190MiB/s (199MB/s-199MB/s), io=374MiB (393MB), run=1974-1974msec 00:29:18.507 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=193MiB (202MB), run=1806-1806msec 00:29:18.766 ----------------------------------------------------- 00:29:18.766 Suppressions used: 00:29:18.766 count bytes template 00:29:18.766 1 63 /usr/src/fio/parse.c 00:29:18.766 283 27168 /usr/src/fio/iolog.c 00:29:18.766 1 8 libtcmalloc_minimal.so 00:29:18.766 ----------------------------------------------------- 00:29:18.766 00:29:18.766 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:29:19.026 01:13:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 192.168.100.8 00:29:22.322 Nvme0n1 00:29:22.322 01:13:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:24.853 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d3b3368e-e0ec-4d49-9dc0-b63b22dc42b4 00:29:24.853 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d3b3368e-e0ec-4d49-9dc0-b63b22dc42b4 00:29:24.853 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=d3b3368e-e0ec-4d49-9dc0-b63b22dc42b4 00:29:24.853 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:24.853 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:29:24.853 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:29:24.853 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:25.112 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:25.112 { 00:29:25.112 "uuid": "d3b3368e-e0ec-4d49-9dc0-b63b22dc42b4", 00:29:25.112 
"name": "lvs_0", 00:29:25.112 "base_bdev": "Nvme0n1", 00:29:25.112 "total_data_clusters": 930, 00:29:25.112 "free_clusters": 930, 00:29:25.112 "block_size": 512, 00:29:25.112 "cluster_size": 1073741824 00:29:25.112 } 00:29:25.112 ]' 00:29:25.112 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d3b3368e-e0ec-4d49-9dc0-b63b22dc42b4") .free_clusters' 00:29:25.112 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:29:25.113 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d3b3368e-e0ec-4d49-9dc0-b63b22dc42b4") .cluster_size' 00:29:25.372 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:29:25.372 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:29:25.372 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:29:25.372 952320 00:29:25.372 01:13:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:25.631 d1ba013a-2053-489a-a30d-3dd5fbca76cf 00:29:25.631 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:25.890 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:26.150 01:13:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:26.731 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:26.731 fio-3.35 00:29:26.731 Starting 1 thread 00:29:29.252 00:29:29.252 test: (groupid=0, jobs=1): err= 0: pid=485913: Tue Nov 19 01:13:35 2024 00:29:29.252 read: IOPS=9423, BW=36.8MiB/s (38.6MB/s)(73.8MiB/2005msec) 00:29:29.252 slat (nsec): min=1523, max=31102, avg=1698.44, stdev=367.39 00:29:29.252 clat (usec): min=481, max=169260, avg=6750.98, stdev=9573.39 00:29:29.252 lat (usec): min=483, max=169291, avg=6752.67, stdev=9573.45 00:29:29.252 clat percentiles (msec): 00:29:29.252 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 7], 00:29:29.252 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:29:29.252 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:29:29.252 | 99.00th=[ 7], 99.50th=[ 9], 99.90th=[ 169], 99.95th=[ 169], 00:29:29.252 | 99.99th=[ 169] 00:29:29.252 bw ( KiB/s): min=26176, max=41848, per=99.92%, avg=37664.00, stdev=7662.89, samples=4 00:29:29.252 iops : min= 6544, max=10462, avg=9416.00, stdev=1915.72, samples=4 00:29:29.252 write: IOPS=9426, BW=36.8MiB/s (38.6MB/s)(73.8MiB/2005msec); 0 zone resets 00:29:29.252 slat (nsec): min=1565, max=18067, avg=1760.92, stdev=323.88 00:29:29.252 clat (usec): min=170, max=169546, avg=6689.86, stdev=8938.72 00:29:29.252 lat (usec): min=171, max=169549, avg=6691.62, stdev=8938.78 00:29:29.252 clat percentiles (msec): 00:29:29.252 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 7], 00:29:29.252 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:29:29.252 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:29:29.252 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 169], 99.95th=[ 169], 00:29:29.252 | 99.99th=[ 169] 00:29:29.252 bw ( KiB/s): min=27208, max=41320, per=99.91%, avg=37670.00, stdev=6976.09, samples=4 00:29:29.252 iops : min= 6802, max=10330, avg=9417.50, stdev=1744.02, samples=4 00:29:29.252 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:29:29.252 lat (msec) : 2=0.04%, 4=0.17%, 10=99.34%, 20=0.07%, 250=0.34% 00:29:29.252 cpu : usr=99.15%, sys=0.50%, ctx=7, majf=0, minf=2135 00:29:29.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:29.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:29.252 issued 
rwts: total=18894,18900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:29.252 00:29:29.252 Run status group 0 (all jobs): 00:29:29.252 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.4MB), run=2005-2005msec 00:29:29.252 WRITE: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.4MB), run=2005-2005msec 00:29:29.252 ----------------------------------------------------- 00:29:29.252 Suppressions used: 00:29:29.252 count bytes template 00:29:29.252 1 64 /usr/src/fio/parse.c 00:29:29.252 1 8 libtcmalloc_minimal.so 00:29:29.252 ----------------------------------------------------- 00:29:29.252 00:29:29.252 01:13:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:29.509 01:13:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d8d5c280-95ea-4c36-a38c-5e3f27eebbda 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d8d5c280-95ea-4c36-a38c-5e3f27eebbda 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=d8d5c280-95ea-4c36-a38c-5e3f27eebbda 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:30.878 { 00:29:30.878 "uuid": "d3b3368e-e0ec-4d49-9dc0-b63b22dc42b4", 00:29:30.878 "name": "lvs_0", 00:29:30.878 "base_bdev": "Nvme0n1", 00:29:30.878 "total_data_clusters": 930, 00:29:30.878 "free_clusters": 0, 00:29:30.878 "block_size": 512, 00:29:30.878 "cluster_size": 1073741824 00:29:30.878 }, 00:29:30.878 { 00:29:30.878 "uuid": "d8d5c280-95ea-4c36-a38c-5e3f27eebbda", 00:29:30.878 "name": "lvs_n_0", 00:29:30.878 "base_bdev": "d1ba013a-2053-489a-a30d-3dd5fbca76cf", 00:29:30.878 "total_data_clusters": 237847, 00:29:30.878 "free_clusters": 237847, 00:29:30.878 "block_size": 512, 00:29:30.878 "cluster_size": 4194304 00:29:30.878 } 00:29:30.878 ]' 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d8d5c280-95ea-4c36-a38c-5e3f27eebbda") .free_clusters' 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d8d5c280-95ea-4c36-a38c-5e3f27eebbda") .cluster_size' 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1378 -- # echo 951388 00:29:30.878 951388 00:29:30.878 01:13:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:31.811 248182c8-e519-4cf7-8ed6-15747f15f2b7 00:29:31.811 01:13:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:32.069 01:13:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:32.327 01:13:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:32.585 01:13:39 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:32.843 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:32.843 fio-3.35 00:29:32.843 Starting 1 thread 00:29:35.371 00:29:35.371 test: (groupid=0, jobs=1): err= 0: pid=486995: Tue Nov 19 01:13:41 2024 00:29:35.371 read: IOPS=8681, BW=33.9MiB/s (35.6MB/s)(68.0MiB/2006msec) 00:29:35.371 slat (nsec): min=1535, max=27309, avg=1709.60, stdev=377.53 00:29:35.371 clat (usec): min=3514, max=12697, avg=7275.91, stdev=268.99 00:29:35.371 lat (usec): min=3517, max=12699, avg=7277.62, stdev=268.96 00:29:35.371 clat percentiles (usec): 00:29:35.371 | 1.00th=[ 7111], 5.00th=[ 7177], 10.00th=[ 7177], 20.00th=[ 7242], 00:29:35.371 | 30.00th=[ 7242], 40.00th=[ 7242], 50.00th=[ 7242], 60.00th=[ 7242], 00:29:35.371 | 70.00th=[ 7308], 80.00th=[ 7308], 90.00th=[ 7308], 95.00th=[ 7373], 00:29:35.371 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[11600], 99.95th=[11731], 00:29:35.371 | 99.99th=[12649] 00:29:35.371 bw ( KiB/s): min=32574, max=35536, per=99.89%, avg=34689.50, stdev=1414.18, samples=4 00:29:35.371 iops : min= 8143, max= 8884, avg=8672.25, stdev=353.79, samples=4 00:29:35.371 write: IOPS=8672, BW=33.9MiB/s (35.5MB/s)(68.0MiB/2006msec); 0 zone resets 00:29:35.371 slat (nsec): min=1565, max=124154, avg=1782.55, stdev=987.66 00:29:35.371 clat (usec): min=3514, max=12688, avg=7300.27, stdev=263.01 00:29:35.371 lat (usec): min=3518, max=12690, avg=7302.05, stdev=262.97 00:29:35.371 clat percentiles (usec): 00:29:35.371 | 1.00th=[ 7177], 5.00th=[ 7177], 10.00th=[ 7242], 20.00th=[ 7242], 00:29:35.371 | 30.00th=[ 7242], 40.00th=[ 7242], 50.00th=[ 7242], 60.00th=[ 7308], 00:29:35.371 | 70.00th=[ 7308], 80.00th=[ 7308], 90.00th=[ 7373], 95.00th=[ 7439], 00:29:35.371 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[10814], 99.95th=[11731], 00:29:35.371 | 99.99th=[12649] 00:29:35.371 bw ( KiB/s): min=33325, max=35304, per=99.88%, avg=34649.25, stdev=895.63, samples=4 00:29:35.371 iops : min= 8331, max= 8826, avg=8662.25, stdev=224.03, samples=4 00:29:35.371 lat (msec) : 4=0.03%, 10=99.83%, 20=0.14% 00:29:35.371 cpu : usr=99.45%, sys=0.20%, ctx=22, majf=0, minf=2225 00:29:35.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:35.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:35.371 issued rwts: total=17415,17398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:35.371 00:29:35.371 Run status group 0 (all jobs): 00:29:35.371 READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.0MiB (71.3MB), run=2006-2006msec 00:29:35.371 WRITE: bw=33.9MiB/s (35.5MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=68.0MiB (71.3MB), run=2006-2006msec 00:29:35.630 ----------------------------------------------------- 00:29:35.630 Suppressions used: 00:29:35.630 count bytes template 00:29:35.630 1 64 /usr/src/fio/parse.c 00:29:35.630 1 8 libtcmalloc_minimal.so 00:29:35.630 ----------------------------------------------------- 00:29:35.630 00:29:35.630 01:13:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:35.888 01:13:42 
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:35.888 01:13:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:40.071 01:13:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:40.071 01:13:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:43.359 01:13:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:43.359 01:13:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:45.265 rmmod nvme_rdma 00:29:45.265 rmmod nvme_fabrics 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 482876 ']' 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 482876 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 482876 ']' 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 482876 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482876 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482876' 00:29:45.265 killing process with pid 482876 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@973 -- # kill 482876 00:29:45.265 01:13:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 482876 00:29:46.644 01:13:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:46.644 01:13:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:46.644 00:29:46.644 real 0m42.363s 00:29:46.644 user 2m58.887s 00:29:46.644 sys 0m8.439s 00:29:46.644 01:13:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.644 01:13:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.644 ************************************ 00:29:46.644 END TEST nvmf_fio_host 00:29:46.644 ************************************ 00:29:46.644 01:13:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:29:46.644 01:13:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:46.644 01:13:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.645 ************************************ 00:29:46.645 START TEST nvmf_failover 00:29:46.645 ************************************ 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:29:46.645 * Looking for test storage... 00:29:46.645 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # 
(( v = 0 )) 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:46.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.645 --rc genhtml_branch_coverage=1 00:29:46.645 --rc genhtml_function_coverage=1 00:29:46.645 --rc genhtml_legend=1 00:29:46.645 --rc geninfo_all_blocks=1 00:29:46.645 --rc geninfo_unexecuted_blocks=1 00:29:46.645 00:29:46.645 ' 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:46.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.645 --rc genhtml_branch_coverage=1 00:29:46.645 --rc genhtml_function_coverage=1 00:29:46.645 --rc genhtml_legend=1 00:29:46.645 --rc geninfo_all_blocks=1 00:29:46.645 --rc geninfo_unexecuted_blocks=1 00:29:46.645 00:29:46.645 ' 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:46.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.645 --rc genhtml_branch_coverage=1 00:29:46.645 --rc genhtml_function_coverage=1 00:29:46.645 --rc genhtml_legend=1 00:29:46.645 --rc geninfo_all_blocks=1 00:29:46.645 --rc geninfo_unexecuted_blocks=1 00:29:46.645 00:29:46.645 ' 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:46.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.645 --rc genhtml_branch_coverage=1 00:29:46.645 --rc genhtml_function_coverage=1 00:29:46.645 --rc genhtml_legend=1 00:29:46.645 --rc geninfo_all_blocks=1 00:29:46.645 --rc geninfo_unexecuted_blocks=1 00:29:46.645 00:29:46.645 ' 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:46.645 01:13:53 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.645 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:46.646 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:29:46.646 01:13:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.215 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:53.216 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:53.216 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
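At this point nvmftestinit has matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b) at 0000:af:00.0 and 0000:af:00.1 against its table of supported RDMA devices; the trace that follows loads the irdma driver with RoCE enabled and maps each PCI function to its kernel net device (cvl_0_0 and cvl_0_1). A minimal stand-alone sketch of that device-to-netdev mapping, assuming only lspci and sysfs rather than the test's own helpers:

  # Hypothetical helper (not part of the test scripts): list E810 ports
  # and the kernel netdevs behind them.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net devices under $pci: $(basename "$dev")"
      done
  done
  # The test additionally enables RoCE on these ports before using them:
  #   modprobe irdma roce_ena=1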
00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@405 -- # modinfo irdma 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:53.216 Found net devices under 0000:af:00.0: cvl_0_0 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:53.216 Found net devices under 0000:af:00.1: cvl_0_1 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo cvl_0_0 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo cvl_0_1 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover 
-- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:53.216 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:29:53.216 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:53.216 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:29:53.216 altname enp175s0f0np0 00:29:53.216 altname ens801f0np0 00:29:53.216 inet 192.168.100.8/24 scope global cvl_0_0 00:29:53.216 valid_lft forever preferred_lft forever 00:29:53.217 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:29:53.217 valid_lft forever preferred_lft forever 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:29:53.217 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:53.217 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:29:53.217 altname enp175s0f1np1 00:29:53.217 altname ens801f1np1 00:29:53.217 inet 192.168.100.9/24 scope global cvl_0_1 00:29:53.217 valid_lft forever preferred_lft forever 00:29:53.217 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:29:53.217 valid_lft forever preferred_lft forever 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo cvl_0_0 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo cvl_0_1 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:53.217 192.168.100.9' 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:53.217 192.168.100.9' 00:29:53.217 01:13:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:53.217 192.168.100.9' 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=492107 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 492107 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 492107 ']' 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.217 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:53.217 [2024-11-19 01:13:59.122524] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:53.217 [2024-11-19 01:13:59.122615] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.217 [2024-11-19 01:13:59.249278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:53.217 [2024-11-19 01:13:59.355867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.217 [2024-11-19 01:13:59.355913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.217 [2024-11-19 01:13:59.355925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.217 [2024-11-19 01:13:59.355951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.217 [2024-11-19 01:13:59.355959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
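With nvmf_tgt now up on cores 1-3, the host/failover.sh calls traced below boil down to the following RPC sequence (rpc.py is shortened here for readability; the log shows the full /var/jenkins/workspace/... path):

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422

bdevperf then attaches to the subsystem with -x failover, and the test removes and re-adds listeners on ports 4420/4421/4422 while I/O runs, which is what the remove_listener/add_listener calls and the io_failed count further down reflect.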
00:29:53.217 [2024-11-19 01:13:59.358284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.217 [2024-11-19 01:13:59.358359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.217 [2024-11-19 01:13:59.358379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.475 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.475 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:29:53.475 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.475 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.475 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:53.475 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.475 01:13:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:53.476 [2024-11-19 01:14:00.149596] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000028fc0/0x617000007c40) succeed. 00:29:53.476 [2024-11-19 01:14:00.159178] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029140/0x617000007fc0) succeed. 00:29:53.476 [2024-11-19 01:14:00.159207] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:29:53.733 01:14:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:53.992 Malloc0 00:29:53.992 01:14:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.992 01:14:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:54.250 01:14:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:54.508 [2024-11-19 01:14:01.010504] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:54.508 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:29:54.508 [2024-11-19 01:14:01.195099] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:54.765 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:29:54.765 [2024-11-19 01:14:01.383786] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:29:54.765 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@31 -- # bdevperf_pid=492544 00:29:54.766 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:54.766 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.766 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 492544 /var/tmp/bdevperf.sock 00:29:54.766 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 492544 ']' 00:29:54.766 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:54.766 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.766 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:54.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:54.766 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.766 01:14:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:55.700 01:14:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:55.700 01:14:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:29:55.700 01:14:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:55.958 NVMe0n1 00:29:55.958 01:14:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:56.216 00:29:56.216 01:14:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=492772 00:29:56.216 01:14:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:56.216 01:14:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:57.151 01:14:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:57.409 01:14:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:00.694 01:14:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:00.694 00:30:00.694 01:14:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:00.953 01:14:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:04.236 01:14:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:04.236 [2024-11-19 01:14:10.701556] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:04.236 01:14:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:05.170 01:14:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:30:05.429 01:14:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 492772 00:30:11.988 { 00:30:11.988 "results": [ 00:30:11.988 { 00:30:11.988 "job": "NVMe0n1", 00:30:11.988 "core_mask": "0x1", 00:30:11.988 "workload": "verify", 00:30:11.988 "status": "finished", 00:30:11.988 "verify_range": { 00:30:11.988 "start": 0, 00:30:11.988 "length": 16384 00:30:11.988 }, 00:30:11.988 "queue_depth": 128, 00:30:11.988 "io_size": 4096, 00:30:11.988 "runtime": 15.004995, 00:30:11.988 "iops": 13521.69727480749, 00:30:11.988 "mibps": 52.819129979716756, 00:30:11.988 "io_failed": 4157, 00:30:11.988 "io_timeout": 0, 00:30:11.988 "avg_latency_us": 9249.739583013075, 00:30:11.988 "min_latency_us": 511.02476190476193, 00:30:11.988 "max_latency_us": 587202.56 00:30:11.988 } 00:30:11.989 ], 00:30:11.989 "core_count": 1 00:30:11.989 } 00:30:11.989 01:14:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 492544 00:30:11.989 01:14:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 492544 ']' 00:30:11.989 01:14:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 492544 00:30:11.989 01:14:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:11.989 01:14:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.989 01:14:17 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492544 00:30:11.989 01:14:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:11.989 01:14:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:11.989 01:14:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492544' 00:30:11.989 killing process with pid 492544 00:30:11.989 01:14:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 492544 00:30:11.989 01:14:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 492544 00:30:12.561 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:12.561 [2024-11-19 01:14:01.483646] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
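For reference, the failover scenario driven above boils down to a short sequence of rpc.py calls: bring up an RDMA transport and a Malloc0-backed subsystem listening on three ports, attach bdevperf to it over 4420 and 4421 with -x failover, start the verify workload, and then remove/re-add listeners to force path switches. A minimal sketch of that sequence (rpc.py and bdevperf paths abbreviated; addresses, ports and flags taken from the trace above):

  # target side: RDMA transport, Malloc0 namespace, listeners on 4420/4421/4422
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
  done

  # initiator side: bdevperf with two failover paths, then drop the active listener
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420   # I/O fails over to 4421

The JSON block printed by bdevperf above is the aggregate result of that run: roughly 13.5k IOPS at 4 KiB / QD128 over 15 s, with the 4157 failed I/Os and the ~587 ms max latency presumably being the requests caught in flight during the listener removals.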
00:30:12.561 [2024-11-19 01:14:01.483735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492544 ] 00:30:12.561 [2024-11-19 01:14:01.607818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.561 [2024-11-19 01:14:01.721081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.561 Running I/O for 15 seconds... 00:30:12.561 15360.00 IOPS, 60.00 MiB/s [2024-11-19T00:14:19.254Z] [2024-11-19 01:14:04.564327] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:30:12.561 [2024-11-19 01:14:04.564397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0xc4737f4a 00:30:12.561 [2024-11-19 01:14:04.564414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.561 [2024-11-19 01:14:04.564444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0xc4737f4a 00:30:12.561 [2024-11-19 01:14:04.564456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.561 [2024-11-19 01:14:04.564473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0xc4737f4a 00:30:12.561 [2024-11-19 01:14:04.564483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432d000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1912 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000431f000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431d000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004317000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.564989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.564999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.565022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430d000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.565045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430b000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.565069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.565093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.565118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.565141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.565165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.565188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000042ff000 len:0x1000 key:0xc4737f4a 00:30:12.562 [2024-11-19 01:14:04.565213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.562 [2024-11-19 01:14:04.565237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.562 [2024-11-19 01:14:04.565262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.562 [2024-11-19 01:14:04.565275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 
nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:12.563 [2024-11-19 01:14:04.565768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.565979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.565992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.566002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.566015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.566025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.566038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.566047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.566062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.566072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.566084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.566094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.563 [2024-11-19 01:14:04.566107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.563 [2024-11-19 01:14:04.566119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 
[2024-11-19 01:14:04.566483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.564 [2024-11-19 01:14:04.566937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.564 [2024-11-19 01:14:04.566950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 
nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.566960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.566973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.566982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.566997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
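The long runs of ABORTED - SQ DELETION completions in this dump are the expected fallout of pulling the 4420 listener out from under an active connection: every command still queued on that submission queue is completed manually with an abort status, after which bdev_nvme fails the trid over to 4421 and resets the controller (the "Start failover" and "Resetting controller successful" notices appear a little further down). When reading try.txt by hand, a rough per-opcode tally of the aborts plus the failover events is usually enough; a throwaway sketch, assuming try.txt is in the current directory:

  grep -c 'ABORTED - SQ DELETION' try.txt                           # total abort notices
  grep 'nvme_io_qpair_print_command' try.txt | grep -c ' READ '     # aborted reads
  grep 'nvme_io_qpair_print_command' try.txt | grep -c ' WRITE '    # aborted writes
  grep -E 'Start failover|Resetting controller successful' try.txt  # the path switches themselves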
00:30:12.565 [2024-11-19 01:14:04.567193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.567424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:04.567434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.568002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:12.565 [2024-11-19 01:14:04.568021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:12.565 [2024-11-19 01:14:04.568033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2800 len:8 PRP1 0x0 PRP2 0x0 00:30:12.565 [2024-11-19 01:14:04.568045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:04.568232] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:30:12.565 [2024-11-19 01:14:04.568247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:12.565 [2024-11-19 01:14:04.571351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:12.565 [2024-11-19 01:14:04.571409] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:30:12.565 [2024-11-19 01:14:04.604347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:12.565 [2024-11-19 01:14:04.646237] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:30:12.565 10432.50 IOPS, 40.75 MiB/s [2024-11-19T00:14:19.258Z] 12091.00 IOPS, 47.23 MiB/s [2024-11-19T00:14:19.258Z] 12919.75 IOPS, 50.47 MiB/s [2024-11-19T00:14:19.258Z] 12001.00 IOPS, 46.88 MiB/s [2024-11-19T00:14:19.258Z] [2024-11-19 01:14:08.019358] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:30:12.565 [2024-11-19 01:14:08.019419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x4ba22479 00:30:12.565 [2024-11-19 01:14:08.019437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:08.019467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x4ba22479 00:30:12.565 [2024-11-19 01:14:08.019480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:08.019494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:08.019506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.565 [2024-11-19 01:14:08.019519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.565 [2024-11-19 01:14:08.019531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:12.565 - 00:30:12.569 [2024-11-19 01:14:08.019543 - 01:14:08.022431] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ (lba 97712-98216, SGL KEYED DATA BLOCK, key:0x4ba22479) and WRITE (lba 98240-98704, SGL DATA BLOCK OFFSET 0x0) commands on sqid:1 nsid:1 len:8, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:12.569 [2024-11-19 01:14:08.022986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:30:12.569 [2024-11-19 01:14:08.023004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:30:12.569 [2024-11-19 01:14:08.023014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98712 len:8 PRP1 0x0 PRP2 0x0 
00:30:12.569 [2024-11-19 01:14:08.023025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:12.569 [2024-11-19 01:14:08.023216] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 
00:30:12.569 [2024-11-19 01:14:08.023230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:30:12.569 [2024-11-19 01:14:08.023269] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 
00:30:12.569 [2024-11-19 01:14:08.023286 - 01:14:08.023369] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1-4 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:12.569 [2024-11-19 01:14:08.055710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 
00:30:12.569 [2024-11-19 01:14:08.055732] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] already in failed state 
00:30:12.569 [2024-11-19 01:14:08.055746] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Unable to perform failover, already in progress. 
00:30:12.569 [2024-11-19 01:14:08.058790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 
00:30:12.569 [2024-11-19 01:14:08.105125] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:30:12.569 12154.17 IOPS, 47.48 MiB/s [2024-11-19T00:14:19.263Z] 12648.71 IOPS, 49.41 MiB/s [2024-11-19T00:14:19.263Z] 13001.38 IOPS, 50.79 MiB/s [2024-11-19T00:14:19.263Z] 13244.11 IOPS, 51.73 MiB/s [2024-11-19T00:14:19.263Z] 
00:30:12.570 [2024-11-19 01:14:12.499332] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 
00:30:12.570 - 00:30:12.573 [2024-11-19 01:14:12.499393 - 01:14:12.500842] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ (lba 59416-59680, SGL KEYED DATA BLOCK, key:0x571b9cfe) and WRITE (lba 59928-60176, SGL DATA BLOCK OFFSET 0x0) commands on sqid:1 nsid:1 len:8, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:12.573 [2024-11-19 01:14:12.500854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x571b9cfe 
00:30:12.573 [2024-11-19 01:14:12.500864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.573 [2024-11-19 01:14:12.500877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0x571b9cfe 00:30:12.573 [2024-11-19 01:14:12.500886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.573 [2024-11-19 01:14:12.500899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x571b9cfe 00:30:12.573 [2024-11-19 01:14:12.500910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.573 [2024-11-19 01:14:12.500922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0x571b9cfe 00:30:12.573 [2024-11-19 01:14:12.500931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.500943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x571b9cfe 00:30:12.574 [2024-11-19 01:14:12.500953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.500964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x571b9cfe 00:30:12.574 [2024-11-19 01:14:12.500974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.500985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x571b9cfe 00:30:12.574 [2024-11-19 01:14:12.500994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.501006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0x571b9cfe 00:30:12.574 [2024-11-19 01:14:12.501015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.501027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x571b9cfe 00:30:12.574 [2024-11-19 01:14:12.501036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.501047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x571b9cfe 00:30:12.574 [2024-11-19 01:14:12.501057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.501068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x571b9cfe 00:30:12.574 [2024-11-19 01:14:12.501077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.501088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x571b9cfe 00:30:12.574 [2024-11-19 01:14:12.501098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.501109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x571b9cfe 00:30:12.574 [2024-11-19 01:14:12.501119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.501133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x571b9cfe 00:30:12.574 [2024-11-19 01:14:12.501143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.574 [2024-11-19 01:14:12.501155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x571b9cfe 00:30:12.575 [2024-11-19 01:14:12.501165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.575 [2024-11-19 01:14:12.501186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.575 [2024-11-19 01:14:12.501208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.575 [2024-11-19 01:14:12.501228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.575 [2024-11-19 01:14:12.501249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:12.575 [2024-11-19 01:14:12.501271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.575 [2024-11-19 01:14:12.501291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.575 [2024-11-19 01:14:12.501317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.575 [2024-11-19 01:14:12.501338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0x571b9cfe 00:30:12.575 [2024-11-19 01:14:12.501363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0x571b9cfe 00:30:12.575 [2024-11-19 01:14:12.501386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.575 [2024-11-19 01:14:12.501398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004317000 len:0x1000 key:0x571b9cfe 00:30:12.575 [2024-11-19 01:14:12.501407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.576 [2024-11-19 01:14:12.501421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x571b9cfe 00:30:12.576 [2024-11-19 01:14:12.501431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.576 [2024-11-19 01:14:12.501442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0x571b9cfe 00:30:12.576 [2024-11-19 01:14:12.501452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.576 [2024-11-19 01:14:12.501464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x571b9cfe 00:30:12.576 [2024-11-19 01:14:12.501473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.576 [2024-11-19 
01:14:12.501489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0x571b9cfe 00:30:12.576 [2024-11-19 01:14:12.501499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.576 [2024-11-19 01:14:12.501511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x571b9cfe 00:30:12.576 [2024-11-19 01:14:12.501520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.576 [2024-11-19 01:14:12.501532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x571b9cfe 00:30:12.576 [2024-11-19 01:14:12.501542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.576 [2024-11-19 01:14:12.501553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x571b9cfe 00:30:12.576 [2024-11-19 01:14:12.501563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.576 [2024-11-19 01:14:12.501575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x571b9cfe 00:30:12.576 [2024-11-19 01:14:12.501584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.576 [2024-11-19 01:14:12.501596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x571b9cfe 00:30:12.576 [2024-11-19 01:14:12.501605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.577 [2024-11-19 01:14:12.501617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x571b9cfe 00:30:12.577 [2024-11-19 01:14:12.501626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.577 [2024-11-19 01:14:12.501638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x571b9cfe 00:30:12.577 [2024-11-19 01:14:12.501647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.577 [2024-11-19 01:14:12.501659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x571b9cfe 00:30:12.577 [2024-11-19 01:14:12.501669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.577 [2024-11-19 01:14:12.501681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60248 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.577 [2024-11-19 01:14:12.501690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.577 [2024-11-19 01:14:12.501703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.577 [2024-11-19 01:14:12.501712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.577 [2024-11-19 01:14:12.501723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.577 [2024-11-19 01:14:12.501733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.577 [2024-11-19 01:14:12.501745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.577 [2024-11-19 01:14:12.501754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.577 [2024-11-19 01:14:12.501766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.577 [2024-11-19 01:14:12.501775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.578 [2024-11-19 01:14:12.501786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.578 [2024-11-19 01:14:12.501796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.578 [2024-11-19 01:14:12.501807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.578 [2024-11-19 01:14:12.501816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.578 [2024-11-19 01:14:12.501828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.578 [2024-11-19 01:14:12.501837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.578 [2024-11-19 01:14:12.501848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.578 [2024-11-19 01:14:12.501858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.578 [2024-11-19 01:14:12.501869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.578 [2024-11-19 01:14:12.501878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.578 [2024-11-19 01:14:12.501890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.578 
[2024-11-19 01:14:12.501899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.578 [2024-11-19 01:14:12.501910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.578 [2024-11-19 01:14:12.501919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.578 [2024-11-19 01:14:12.501932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.578 [2024-11-19 01:14:12.501942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.579 [2024-11-19 01:14:12.501954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.579 [2024-11-19 01:14:12.501963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.579 [2024-11-19 01:14:12.501974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.579 [2024-11-19 01:14:12.501984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.579 [2024-11-19 01:14:12.501995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.579 [2024-11-19 01:14:12.502004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.579 [2024-11-19 01:14:12.502015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.579 [2024-11-19 01:14:12.502025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.579 [2024-11-19 01:14:12.502037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.579 [2024-11-19 01:14:12.502047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.579 [2024-11-19 01:14:12.502058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.579 [2024-11-19 01:14:12.502067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.580 [2024-11-19 01:14:12.502078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.580 [2024-11-19 01:14:12.502088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.580 [2024-11-19 01:14:12.502099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.580 [2024-11-19 01:14:12.502109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.580 [2024-11-19 01:14:12.502120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.580 [2024-11-19 01:14:12.502129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.580 [2024-11-19 01:14:12.502140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:12.580 [2024-11-19 01:14:12.502150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.580 [2024-11-19 01:14:12.502716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:12.581 [2024-11-19 01:14:12.502736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:12.581 [2024-11-19 01:14:12.502747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60432 len:8 PRP1 0x0 PRP2 0x0 00:30:12.581 [2024-11-19 01:14:12.502759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.581 [2024-11-19 01:14:12.502971] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:30:12.581 [2024-11-19 01:14:12.502985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:12.581 [2024-11-19 01:14:12.506052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:12.581 [2024-11-19 01:14:12.506113] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:30:12.581 [2024-11-19 01:14:12.538953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:30:12.581 [2024-11-19 01:14:12.578007] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:30:12.581 12490.50 IOPS, 48.79 MiB/s [2024-11-19T00:14:19.274Z] 12770.64 IOPS, 49.89 MiB/s [2024-11-19T00:14:19.275Z] 13004.00 IOPS, 50.80 MiB/s [2024-11-19T00:14:19.275Z] 13202.85 IOPS, 51.57 MiB/s [2024-11-19T00:14:19.275Z] 13373.36 IOPS, 52.24 MiB/s [2024-11-19T00:14:19.275Z] 13520.87 IOPS, 52.82 MiB/s
00:30:12.582 Latency(us)
00:30:12.582 [2024-11-19T00:14:19.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:12.582 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:12.582 Verification LBA range: start 0x0 length 0x4000
00:30:12.582 NVMe0n1 : 15.00 13521.70 52.82 277.04 0.00 9249.74 511.02 587202.56
00:30:12.582 [2024-11-19T00:14:19.275Z] ===================================================================================================================
00:30:12.582 [2024-11-19T00:14:19.275Z] Total : 13521.70 52.82 277.04 0.00 9249.74 511.02 587202.56
00:30:12.582 Received shutdown signal, test time was about 15.000000 seconds
00:30:12.582
00:30:12.582 Latency(us)
00:30:12.582 [2024-11-19T00:14:19.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:12.583 [2024-11-19T00:14:19.276Z] ===================================================================================================================
00:30:12.583 [2024-11-19T00:14:19.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=495468
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 495468 /var/tmp/bdevperf.sock
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 495468 ']'
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.583 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:13.527 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.527 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:13.527 01:14:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:13.527 [2024-11-19 01:14:20.075709] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:13.528 01:14:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:30:13.786 [2024-11-19 01:14:20.284523] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:30:13.786 01:14:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:14.044 NVMe0n1 00:30:14.044 01:14:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:14.302 00:30:14.302 01:14:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:14.560 00:30:14.560 01:14:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:14.560 01:14:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:14.819 01:14:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:14.819 01:14:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:18.101 01:14:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:18.101 01:14:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:18.101 01:14:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=496341 00:30:18.101 01:14:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:18.101 01:14:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 496341 00:30:19.477 { 00:30:19.477 "results": [ 00:30:19.477 { 
00:30:19.477 "job": "NVMe0n1",
00:30:19.477 "core_mask": "0x1",
00:30:19.477 "workload": "verify",
00:30:19.477 "status": "finished",
00:30:19.477 "verify_range": {
00:30:19.477 "start": 0,
00:30:19.477 "length": 16384
00:30:19.477 },
00:30:19.477 "queue_depth": 128,
00:30:19.477 "io_size": 4096,
00:30:19.477 "runtime": 1.01154,
00:30:19.477 "iops": 15311.307511319374,
00:30:19.477 "mibps": 59.8097949660913,
00:30:19.477 "io_failed": 0,
00:30:19.477 "io_timeout": 0,
00:30:19.477 "avg_latency_us": 8312.371381345927,
00:30:19.477 "min_latency_us": 2949.12,
00:30:19.477 "max_latency_us": 21845.333333333332
00:30:19.477 }
00:30:19.477 ],
00:30:19.477 "core_count": 1
00:30:19.477 }
00:30:19.478 01:14:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:19.478 [2024-11-19 01:14:19.108622] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:30:19.478 [2024-11-19 01:14:19.108708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495468 ]
00:30:19.478 [2024-11-19 01:14:19.245605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:19.478 [2024-11-19 01:14:19.357553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:19.478 [2024-11-19 01:14:21.437348] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:30:19.478 [2024-11-19 01:14:21.438690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:30:19.478 [2024-11-19 01:14:21.438750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:30:19.478 [2024-11-19 01:14:21.482221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0
00:30:19.478 [2024-11-19 01:14:21.503086] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:30:19.478 Running I/O for 1 seconds...
00:30:19.478 15317.00 IOPS, 59.83 MiB/s
00:30:19.478 Latency(us)
00:30:19.478 [2024-11-19T00:14:26.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:19.478 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:19.478 Verification LBA range: start 0x0 length 0x4000
00:30:19.478 NVMe0n1 : 1.01 15311.31 59.81 0.00 0.00 8312.37 2949.12 21845.33
00:30:19.478 [2024-11-19T00:14:26.171Z] ===================================================================================================================
00:30:19.478 [2024-11-19T00:14:26.171Z] Total : 15311.31 59.81 0.00 0.00 8312.37 2949.12 21845.33
00:30:19.478 01:14:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:19.478 01:14:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:30:19.736 01:14:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:19.736 01:14:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:19.736 01:14:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:30:19.736 01:14:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:19.994 01:14:26 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 495468
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 495468 ']'
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 495468
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 495468
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 495468'
00:30:23.277 killing process with pid 495468
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 495468
00:30:23.277 01:14:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 495468
00:30:24.211 01:14:30
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:24.211 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.470 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:24.470 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:24.470 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:24.470 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.470 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:30:24.470 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:24.470 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:24.470 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:30:24.470 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.470 01:14:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:24.470 rmmod nvme_rdma 00:30:24.470 rmmod nvme_fabrics 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 492107 ']' 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 492107 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 492107 ']' 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 492107 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492107 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492107' 00:30:24.470 killing process with pid 492107 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 492107 00:30:24.470 01:14:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 492107 00:30:25.847 01:14:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.847 01:14:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:25.847 00:30:25.847 real 0m39.388s 00:30:25.847 user 2m14.106s 00:30:25.847 sys 0m6.618s 00:30:25.847 01:14:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.847 01:14:32 nvmf_rdma.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:30:25.847 ************************************ 00:30:25.847 END TEST nvmf_failover 00:30:25.847 ************************************ 00:30:25.847 01:14:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:30:25.847 01:14:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:25.847 01:14:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.847 01:14:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.847 ************************************ 00:30:25.847 START TEST nvmf_host_discovery 00:30:25.847 ************************************ 00:30:25.847 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:30:26.108 * Looking for test storage... 00:30:26.108 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:26.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.108 --rc genhtml_branch_coverage=1 00:30:26.108 --rc genhtml_function_coverage=1 00:30:26.108 --rc genhtml_legend=1 00:30:26.108 --rc geninfo_all_blocks=1 00:30:26.108 --rc geninfo_unexecuted_blocks=1 00:30:26.108 00:30:26.108 ' 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:26.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.108 --rc genhtml_branch_coverage=1 00:30:26.108 --rc genhtml_function_coverage=1 00:30:26.108 --rc genhtml_legend=1 00:30:26.108 --rc geninfo_all_blocks=1 00:30:26.108 --rc geninfo_unexecuted_blocks=1 00:30:26.108 00:30:26.108 ' 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:26.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.108 --rc genhtml_branch_coverage=1 00:30:26.108 --rc genhtml_function_coverage=1 00:30:26.108 --rc genhtml_legend=1 00:30:26.108 --rc geninfo_all_blocks=1 00:30:26.108 --rc geninfo_unexecuted_blocks=1 00:30:26.108 00:30:26.108 ' 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:26.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.108 --rc genhtml_branch_coverage=1 00:30:26.108 --rc genhtml_function_coverage=1 00:30:26.108 --rc genhtml_legend=1 00:30:26.108 --rc geninfo_all_blocks=1 00:30:26.108 --rc geninfo_unexecuted_blocks=1 00:30:26.108 00:30:26.108 ' 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:26.108 01:14:32 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.108 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:26.109 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure 
the same IP for host and target.' 00:30:26.109 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:30:26.109 00:30:26.109 real 0m0.212s 00:30:26.109 user 0m0.135s 00:30:26.109 sys 0m0.091s 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.109 ************************************ 00:30:26.109 END TEST nvmf_host_discovery 00:30:26.109 ************************************ 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.109 ************************************ 00:30:26.109 START TEST nvmf_host_multipath_status 00:30:26.109 ************************************ 00:30:26.109 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:30:26.369 * Looking for test storage... 00:30:26.369 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.369 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:30:26.370 
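The trace at this point (and it resumes below with the per-component loop) is scripts/common.sh stepping through its dotted-version comparison, lt 1.15 2 via cmp_versions, which the harness uses to decide which lcov/gcov flags to export. The following is a minimal standalone sketch of that comparison logic under the simplifying assumption that every component is numeric (the real helper also runs a decimal() regex check); ver_lt is a hypothetical name, not the SPDK function.

  # Sketch: split each version on '.', '-' or ':' and compare components numerically,
  # left to right. Returns 0 (true) when $1 is strictly older than $2.
  ver_lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          a=${ver1[v]:-0}; b=${ver2[v]:-0}   # missing components count as 0
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1
  }
  ver_lt 1.15 2 && echo "lcov older than 2: enable branch/function coverage options"

In this run the comparison succeeds (1.15 < 2), which is why the trace below exports LCOV_OPTS with the --rc lcov_branch_coverage=1 and --rc lcov_function_coverage=1 switches.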
01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:26.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.370 --rc genhtml_branch_coverage=1 00:30:26.370 --rc genhtml_function_coverage=1 00:30:26.370 --rc genhtml_legend=1 00:30:26.370 --rc geninfo_all_blocks=1 00:30:26.370 --rc geninfo_unexecuted_blocks=1 00:30:26.370 00:30:26.370 ' 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:26.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.370 --rc genhtml_branch_coverage=1 00:30:26.370 --rc genhtml_function_coverage=1 00:30:26.370 --rc genhtml_legend=1 00:30:26.370 --rc geninfo_all_blocks=1 00:30:26.370 --rc geninfo_unexecuted_blocks=1 00:30:26.370 00:30:26.370 ' 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:26.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.370 --rc genhtml_branch_coverage=1 00:30:26.370 --rc genhtml_function_coverage=1 00:30:26.370 --rc genhtml_legend=1 00:30:26.370 --rc geninfo_all_blocks=1 00:30:26.370 --rc geninfo_unexecuted_blocks=1 00:30:26.370 00:30:26.370 ' 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:26.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.370 --rc genhtml_branch_coverage=1 00:30:26.370 --rc 
genhtml_function_coverage=1 00:30:26.370 --rc genhtml_legend=1 00:30:26.370 --rc geninfo_all_blocks=1 00:30:26.370 --rc geninfo_unexecuted_blocks=1 00:30:26.370 00:30:26.370 ' 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:30:26.370 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:26.370 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/bpftrace.sh 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.371 01:14:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.940 
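Before nvmftestinit starts probing hardware, multipath_status.sh has already set the handful of knobs the rest of this log keeps referring to. They are restated below purely as a reference for the sketches further down; the values are copied from the trace above, and the MALLOC_* pair feeds bdev_malloc_create later on (size in MB, then block size).

  # Test knobs set by host/multipath_status.sh in this run (copied from the trace).
  MALLOC_BDEV_SIZE=64                        # MB handed to bdev_malloc_create
  MALLOC_BLOCK_SIZE=512                      # block size of the malloc bdev
  NQN=nqn.2016-06.io.spdk:cnode1             # subsystem under test
  rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # bdevperf gets its own RPC socket

The trace that follows is gather_supported_nvmf_pci_devs classifying the node's NICs by PCI vendor:device ID into the e810, x722 and mlx buckets.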
01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 
-- # pci_devs=("${e810[@]}") 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:32.940 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:32.940 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@405 -- # modinfo irdma 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.940 
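Both ports 0000:af:00.0 and 0000:af:00.1 match the Intel E810 ID 0x8086:0x159b, so the harness keeps only the e810 bucket, switches NVME_CONNECT to 'nvme connect -i 15' for RDMA, loads the irdma driver with RoCE enabled, and then maps each PCI function to its kernel net device via sysfs. A condensed, hand-rolled sketch of those steps is below; the device-ID list is taken from the arrays in the trace, the grep-over-lspci is illustrative only (the harness uses its own pci_bus_cache), and the 0000:af:00.x paths are specific to this host.

  # Rough equivalent of the NIC classification seen above, by vendor:device ID.
  e810_ids="1592|159b"                                        # Intel E810
  x722_ids="37d2"                                             # Intel X722
  mlx_ids="1013|1015|1017|1019|101b|101d|1021|a2d6|a2dc"      # Mellanox ConnectX/BlueField
  lspci -Dnn | grep -Ei "8086:(${e810_ids}|${x722_ids})|15b3:(${mlx_ids})"
  # Enable RoCE on the Intel irdma driver (the trace runs exactly this modprobe).
  modprobe irdma roce_ena=1
  # Map a PCI function to its net device the same way the trace does: every entry
  # under /sys/bus/pci/devices/<bdf>/net/ is an interface name.
  ls /sys/bus/pci/devices/0000:af:00.0/net    # -> cvl_0_0 on this node
  ls /sys/bus/pci/devices/0000:af:00.1/net    # -> cvl_0_1 on this node

The "Found net devices under 0000:af:00.x" lines that follow confirm the cvl_0_0 / cvl_0_1 mapping on this node.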
01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:32.940 Found net devices under 0000:af:00.0: cvl_0_0 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:32.940 Found net devices under 0000:af:00.1: cvl_0_1 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.940 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo cvl_0_0 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo cvl_0_1 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:30:32.941 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:30:32.941 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:30:32.941 altname enp175s0f0np0 00:30:32.941 altname ens801f0np0 00:30:32.941 inet 192.168.100.8/24 scope global cvl_0_0 00:30:32.941 valid_lft forever preferred_lft forever 00:30:32.941 inet6 
fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:30:32.941 valid_lft forever preferred_lft forever 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:30:32.941 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:30:32.941 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:30:32.941 altname enp175s0f1np1 00:30:32.941 altname ens801f1np1 00:30:32.941 inet 192.168.100.9/24 scope global cvl_0_1 00:30:32.941 valid_lft forever preferred_lft forever 00:30:32.941 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:30:32.941 valid_lft forever preferred_lft forever 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo cvl_0_0 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # 
continue 2 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo cvl_0_1 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:30:32.941 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:32.942 192.168.100.9' 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:32.942 192.168.100.9' 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:32.942 192.168.100.9' 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=500633 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 500633 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 500633 ']' 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.942 01:14:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:32.942 [2024-11-19 01:14:38.847124] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:32.942 [2024-11-19 01:14:38.847214] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.942 [2024-11-19 01:14:38.973292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:32.942 [2024-11-19 01:14:39.079453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.942 [2024-11-19 01:14:39.079504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.942 [2024-11-19 01:14:39.079514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.942 [2024-11-19 01:14:39.079541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.942 [2024-11-19 01:14:39.079550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
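By this point the harness has read the RDMA interface addresses (192.168.100.8 on cvl_0_0, 192.168.100.9 on cvl_0_1), set NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024', loaded nvme-rdma, and launched nvmf_tgt on core mask 0x3. The address extraction and the target launch can be reproduced with the commands the trace itself uses; only the interface names and the workspace path are specific to this node.

  # Extract the IPv4 address of an RDMA-capable interface, exactly as the trace does.
  ip -o -4 addr show cvl_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
  ip -o -4 addr show cvl_0_1 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.9
  # Host-side prerequisite for connecting over RDMA.
  modprobe nvme-rdma
  # Target launch as logged; run in the background, the harness then waits for the RPC socket.
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

The EAL/reactor notices above and the nvmf_create_transport call below are the target coming up on those two cores and getting its RDMA transport.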
00:30:32.942 [2024-11-19 01:14:39.081736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.942 [2024-11-19 01:14:39.081758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.201 01:14:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.201 01:14:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:30:33.201 01:14:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:33.201 01:14:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:33.201 01:14:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:33.201 01:14:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.201 01:14:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=500633 00:30:33.201 01:14:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:33.201 [2024-11-19 01:14:39.870489] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000028cc0/0x617000007c40) succeed. 00:30:33.201 [2024-11-19 01:14:39.881158] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000028e40/0x617000007fc0) succeed. 00:30:33.201 [2024-11-19 01:14:39.881185] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:30:33.459 01:14:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:33.717 Malloc0 00:30:33.717 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:33.717 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:33.976 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:34.234 [2024-11-19 01:14:40.779093] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:34.234 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:34.493 [2024-11-19 01:14:40.959676] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:34.493 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=501103 00:30:34.493 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:34.493 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:34.493 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 501103 /var/tmp/bdevperf.sock 00:30:34.493 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 501103 ']' 00:30:34.493 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:34.493 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.493 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:34.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
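The provisioning the test just performed, plus the bdevperf initiator it is about to drive, boil down to the calls below. They are copied from the trace; $rpc_py stands for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py as set earlier, and the second listener on port 4421 exists so that the two paths can later be given different ANA states.

  # Target side: RDMA transport, backing bdev, ANA-enabled subsystem, namespace, two listeners.
  $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  # Initiator side: bdevperf on its own RPC socket, 90s verify workload, started paused (-z).
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

As the trace continues below, bdevperf then attaches both listeners with bdev_nvme_attach_controller -x multipath before perform_tests is kicked off.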
00:30:34.493 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.493 01:14:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:35.429 01:14:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.429 01:14:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:30:35.429 01:14:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:35.429 01:14:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:35.688 Nvme0n1 00:30:35.688 01:14:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:35.946 Nvme0n1 00:30:36.205 01:14:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:36.205 01:14:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:38.108 01:14:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:38.108 01:14:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:30:38.367 01:14:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:38.626 01:14:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:39.562 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:39.562 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:39.562 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.562 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:39.821 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.821 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:39.822 01:14:46 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.822 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:39.822 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:39.822 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:39.822 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.822 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:40.080 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.080 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:40.080 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.080 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:40.339 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.339 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:40.339 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.339 01:14:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:40.598 01:14:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.598 01:14:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:40.598 01:14:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.598 01:14:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:40.598 01:14:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.598 01:14:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:40.598 01:14:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:40.857 01:14:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:41.115 01:14:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:42.051 01:14:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:42.051 01:14:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:42.051 01:14:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.051 01:14:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:42.310 01:14:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.310 01:14:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:42.310 01:14:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.310 01:14:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:42.569 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.569 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:42.569 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.569 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:42.569 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.569 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:42.827 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.827 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:42.827 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.827 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:42.827 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:42.827 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.085 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.085 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:43.085 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.085 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:43.343 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.343 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:43.343 01:14:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:43.601 01:14:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:30:43.601 01:14:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 
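
The trace above keeps cycling through the same three helpers from host/multipath_status.sh: set_ANA_state flips the ANA state of the two RDMA listeners on the target, check_status then asserts the expected per-path view, and port_status performs the actual comparison by querying bdevperf over its RPC socket and filtering the bdev_nvme_get_io_paths output with jq. Below is a minimal bash sketch of what those helpers appear to do, reconstructed only from the commands visible in this trace; the rpc_py variable, argument handling, and return behaviour are assumptions, not the actual script source.

  # Hedged sketch, not the real multipath_status.sh: reconstructed from the traced commands.
  rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py   # assumed helper variable

  port_status() {
      local port=$1 attr=$2 expected=$3
      # Ask bdevperf for its I/O paths and pull one attribute (current/connected/accessible)
      # for the path whose listener uses transport service id $port.
      local actual
      actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
      [[ "$actual" == "$expected" ]]
  }

  set_ANA_state() {
      # Set the ANA state of the 4420 listener to $1 and of the 4421 listener to $2.
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s 4420 -n "$1"
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s 4421 -n "$2"
  }

In the trace, check_status is then simply six such port_status assertions in sequence (current, connected, accessible for each of ports 4420 and 4421), run one second after each set_ANA_state change so the initiator has time to observe the new ANA state.
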
00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.976 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:45.235 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.235 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:45.235 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.235 01:14:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:45.494 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.494 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:45.494 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.494 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:45.753 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.753 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:45.753 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.753 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:46.012 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.012 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:46.012 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:46.012 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:46.271 01:14:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:47.208 01:14:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:47.208 01:14:53 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:47.208 01:14:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.208 01:14:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:47.466 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.466 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:47.466 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.466 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:47.726 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:47.726 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:47.726 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:47.726 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.984 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.984 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:47.984 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.984 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:48.243 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.243 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:48.243 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.243 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:48.243 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.243 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:48.243 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.243 01:14:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:48.502 01:14:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:48.502 01:14:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:48.502 01:14:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:30:48.761 01:14:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:49.020 01:14:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:49.956 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:49.956 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:49.956 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.956 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:50.214 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:50.214 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:50.214 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.214 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:50.214 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:50.214 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:50.214 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:50.214 01:14:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.472 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.472 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:50.472 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.472 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:50.730 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.730 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:50.730 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.730 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:50.988 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:50.988 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:50.988 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:50.988 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.988 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:50.988 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:50.988 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:30:51.246 01:14:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:51.504 01:14:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:52.439 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:52.439 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:52.439 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.439 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:52.705 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:52.705 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:52.705 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.705 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:52.963 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.963 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:52.963 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.963 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:53.223 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.223 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:53.223 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.223 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:53.223 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.223 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:53.223 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.223 01:14:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:53.481 01:15:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:53.482 01:15:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:53.482 01:15:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.482 01:15:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:53.740 01:15:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.740 01:15:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:53.999 01:15:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:53.999 01:15:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:30:53.999 01:15:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:54.258 01:15:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:55.193 01:15:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:55.193 01:15:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:55.193 01:15:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.193 01:15:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:55.451 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.451 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:55.451 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.451 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:55.709 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.709 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:55.709 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:55.709 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.969 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.969 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:55.969 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.969 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:56.228 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.228 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:56.228 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.228 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:56.228 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.228 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:56.228 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.228 01:15:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:56.486 01:15:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.486 01:15:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:56.486 01:15:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:56.745 01:15:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:57.004 01:15:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:57.941 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:57.941 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:57.941 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.941 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:58.200 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:58.200 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:58.200 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.200 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:58.200 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.200 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:58.200 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.200 01:15:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:58.459 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.459 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:58.459 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.459 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:58.718 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.718 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:58.718 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.718 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:58.978 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.978 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:58.978 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.978 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:59.237 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.237 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:59.237 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:59.237 01:15:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:30:59.495 01:15:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:00.432 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:00.432 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:00.432 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.432 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:00.691 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.691 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:00.691 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.691 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:00.950 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.950 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:00.950 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.950 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:01.208 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.208 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:01.208 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.208 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:01.468 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.468 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:01.468 01:15:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.468 01:15:07 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:01.468 01:15:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.468 01:15:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:01.468 01:15:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:01.468 01:15:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.726 01:15:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.726 01:15:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:01.726 01:15:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:01.985 01:15:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:31:02.244 01:15:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:03.178 01:15:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:03.178 01:15:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:03.178 01:15:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.178 01:15:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:03.438 01:15:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.438 01:15:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:03.438 01:15:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.438 01:15:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:03.697 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:03.697 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:03.697 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:03.697 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.697 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.697 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:03.697 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.697 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:03.956 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.956 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:03.956 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.956 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:04.216 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.216 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:04.216 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:04.216 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 501103 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 501103 ']' 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 501103 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501103 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 501103' 00:31:04.475 killing process with pid 501103 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 501103 00:31:04.475 01:15:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 501103 00:31:04.475 { 00:31:04.475 "results": [ 00:31:04.475 { 00:31:04.475 "job": "Nvme0n1", 00:31:04.475 "core_mask": "0x4", 00:31:04.475 "workload": "verify", 00:31:04.475 "status": "terminated", 00:31:04.475 "verify_range": { 00:31:04.475 "start": 0, 00:31:04.475 "length": 16384 00:31:04.475 }, 00:31:04.475 "queue_depth": 128, 00:31:04.475 "io_size": 4096, 00:31:04.475 "runtime": 28.197488, 00:31:04.475 "iops": 13993.693338924375, 00:31:04.475 "mibps": 54.66286460517334, 00:31:04.475 "io_failed": 0, 00:31:04.475 "io_timeout": 0, 00:31:04.475 "avg_latency_us": 9124.802384844334, 00:31:04.475 "min_latency_us": 125.80571428571429, 00:31:04.475 "max_latency_us": 3019898.88 00:31:04.475 } 00:31:04.475 ], 00:31:04.475 "core_count": 1 00:31:04.475 } 00:31:05.414 01:15:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 501103 00:31:05.414 01:15:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:05.414 [2024-11-19 01:14:41.059269] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:31:05.414 [2024-11-19 01:14:41.059381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501103 ] 00:31:05.414 [2024-11-19 01:14:41.182286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.414 [2024-11-19 01:14:41.293326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:05.414 Running I/O for 90 seconds... 
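
The bdevperf summary JSON printed above reports 13993.69 IOPS at a 4096-byte I/O size over a 28.2 s run; the per-second samples that follow in the replayed try.txt output start near 16k IOPS, and the lower overall average presumably reflects the windows during the run when the ANA states were being toggled. A hedged one-liner to cross-check the reported "mibps" figure from that JSON (it assumes the results block was saved to a hypothetical results.json file and is not part of the test itself):

  # Hedged illustration only: recompute throughput from the summary fields above.
  jq -r '.results[0] | "\(.iops * .io_size / 1048576) MiB/s over \(.runtime)s"' results.json
  # 13993.69 IOPS x 4096 B/IO ≈ 57.3 MB/s ≈ 54.66 MiB/s, matching the reported "mibps".
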
00:31:05.414 16002.00 IOPS, 62.51 MiB/s [2024-11-19T00:15:12.107Z] 16192.00 IOPS, 63.25 MiB/s [2024-11-19T00:15:12.107Z] 16235.00 IOPS, 63.42 MiB/s [2024-11-19T00:15:12.107Z] 16228.00 IOPS, 63.39 MiB/s [2024-11-19T00:15:12.107Z] 16231.80 IOPS, 63.41 MiB/s [2024-11-19T00:15:12.107Z] 16268.67 IOPS, 63.55 MiB/s [2024-11-19T00:15:12.107Z] 16243.43 IOPS, 63.45 MiB/s [2024-11-19T00:15:12.107Z] 16208.88 IOPS, 63.32 MiB/s [2024-11-19T00:15:12.107Z] 16185.56 IOPS, 63.22 MiB/s [2024-11-19T00:15:12.107Z] 16164.60 IOPS, 63.14 MiB/s [2024-11-19T00:15:12.107Z] 16151.91 IOPS, 63.09 MiB/s [2024-11-19T00:15:12.107Z] 16137.50 IOPS, 63.04 MiB/s [2024-11-19T00:15:12.107Z] [2024-11-19 01:14:55.257447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 01:14:55.257573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 01:14:55.257604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 01:14:55.257631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 01:14:55.257655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 01:14:55.257684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 01:14:55.257709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 01:14:55.257734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 
01:14:55.257760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 01:14:55.257786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 01:14:55.257816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.414 [2024-11-19 01:14:55.257827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:05.414 [2024-11-19 01:14:55.257840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.257852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.257865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.257878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.257892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.257904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.257917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.257928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.257942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.257953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.257966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.257978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.257992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258251] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 
01:14:55.258507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:05.415 [2024-11-19 01:14:55.258704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.415 [2024-11-19 01:14:55.258715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0xa2a515cf 00:31:05.416 [2024-11-19 01:14:55.258742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.258767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.258791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.258816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.258840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.258865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.258891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.258915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.258939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.258963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.258976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.258988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:31:05.416 [2024-11-19 01:14:55.259729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.259847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.259858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.260177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.260191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.260210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.260221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.260238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.260250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.260266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.260277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.260299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.260314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.260330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.260342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.260359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.260370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.260387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.416 [2024-11-19 01:14:55.260398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:05.416 [2024-11-19 01:14:55.260415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0xa2a515cf 00:31:05.417 [2024-11-19 01:14:55.260545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260602] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.260981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.260998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:103 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.417 [2024-11-19 01:14:55.261394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:05.417 [2024-11-19 01:14:55.261410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004317000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:14:55.261481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:14:55.261766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:14:55.261784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:14:55.261795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:05.418 15518.00 IOPS, 60.62 MiB/s [2024-11-19T00:15:12.111Z] 14409.57 IOPS, 56.29 MiB/s [2024-11-19T00:15:12.111Z] 13448.93 IOPS, 52.53 MiB/s [2024-11-19T00:15:12.111Z] 13109.06 IOPS, 51.21 MiB/s [2024-11-19T00:15:12.111Z] 13294.18 IOPS, 51.93 MiB/s [2024-11-19T00:15:12.111Z] 13430.61 IOPS, 52.46 MiB/s [2024-11-19T00:15:12.111Z] 13462.47 IOPS, 52.59 MiB/s [2024-11-19T00:15:12.111Z] 13486.40 IOPS, 52.68 MiB/s [2024-11-19T00:15:12.111Z] 13561.19 IOPS, 52.97 MiB/s [2024-11-19T00:15:12.111Z] 13683.95 IOPS, 53.45 MiB/s [2024-11-19T00:15:12.111Z] 13793.61 IOPS, 53.88 MiB/s [2024-11-19T00:15:12.111Z] 13827.46 IOPS, 54.01 MiB/s [2024-11-19T00:15:12.111Z] 13825.96 IOPS, 54.01 MiB/s [2024-11-19T00:15:12.111Z] [2024-11-19 01:15:08.717943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:15:08.717997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:15:08.718609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:15:08.718645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:15:08.718673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:15:08.718704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114640 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:15:08.718729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:15:08.718755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:15:08.718780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:15:08.718804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:15:08.718829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:15:08.718854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:15:08.718879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:15:08.718906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:15:08.718931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.418 [2024-11-19 01:15:08.718956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:15:08.718981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.718994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0xa2a515cf 00:31:05.418 [2024-11-19 01:15:08.719007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:05.418 [2024-11-19 01:15:08.719020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 
01:15:08.719206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719716] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.719948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 
01:15:08.719962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0xa2a515cf 00:31:05.419 [2024-11-19 01:15:08.719974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:05.419 [2024-11-19 01:15:08.719987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.419 [2024-11-19 01:15:08.720000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0xa2a515cf 00:31:05.420 [2024-11-19 01:15:08.720025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0xa2a515cf 00:31:05.420 [2024-11-19 01:15:08.720076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0xa2a515cf 00:31:05.420 [2024-11-19 01:15:08.720176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 
01:15:08.720205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0xa2a515cf 00:31:05.420 [2024-11-19 01:15:08.720279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0xa2a515cf 00:31:05.420 [2024-11-19 01:15:08.720335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0xa2a515cf 00:31:05.420 [2024-11-19 01:15:08.720384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0xa2a515cf 00:31:05.420 [2024-11-19 01:15:08.720412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:05.420 
[2024-11-19 01:15:08.720455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:05.420 [2024-11-19 01:15:08.720479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.420 [2024-11-19 01:15:08.720491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:05.420 13827.96 IOPS, 54.02 MiB/s [2024-11-19T00:15:12.113Z] 13914.33 IOPS, 54.35 MiB/s [2024-11-19T00:15:12.113Z] 13986.04 IOPS, 54.63 MiB/s [2024-11-19T00:15:12.113Z] Received shutdown signal, test time was about 28.198173 seconds 00:31:05.420 00:31:05.420 Latency(us) 00:31:05.420 [2024-11-19T00:15:12.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.420 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:05.420 Verification LBA range: start 0x0 length 0x4000 00:31:05.420 Nvme0n1 : 28.20 13993.69 54.66 0.00 0.00 9124.80 125.81 3019898.88 00:31:05.420 [2024-11-19T00:15:12.113Z] =================================================================================================================== 00:31:05.420 [2024-11-19T00:15:12.113Z] Total : 13993.69 54.66 0.00 0.00 9124.80 125.81 3019898.88 00:31:05.420 01:15:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:05.679 rmmod nvme_rdma 00:31:05.679 rmmod nvme_fabrics 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 500633 ']' 00:31:05.679 01:15:12 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 500633 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 500633 ']' 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 500633 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 500633 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 500633' 00:31:05.679 killing process with pid 500633 00:31:05.679 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 500633 00:31:05.680 01:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 500633 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:07.057 00:31:07.057 real 0m40.774s 00:31:07.057 user 1m58.064s 00:31:07.057 sys 0m7.963s 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:07.057 ************************************ 00:31:07.057 END TEST nvmf_host_multipath_status 00:31:07.057 ************************************ 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.057 ************************************ 00:31:07.057 START TEST nvmf_discovery_remove_ifc 00:31:07.057 ************************************ 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:31:07.057 * Looking for test storage... 
00:31:07.057 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:31:07.057 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:07.317 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:07.317 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:07.317 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:07.317 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:07.317 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:31:07.317 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:31:07.317 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:31:07.317 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:31:07.317 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:31:07.317 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:07.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.318 --rc genhtml_branch_coverage=1 00:31:07.318 --rc genhtml_function_coverage=1 00:31:07.318 --rc genhtml_legend=1 00:31:07.318 --rc geninfo_all_blocks=1 00:31:07.318 --rc geninfo_unexecuted_blocks=1 00:31:07.318 00:31:07.318 ' 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:07.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.318 --rc genhtml_branch_coverage=1 00:31:07.318 --rc genhtml_function_coverage=1 00:31:07.318 --rc genhtml_legend=1 00:31:07.318 --rc geninfo_all_blocks=1 00:31:07.318 --rc geninfo_unexecuted_blocks=1 00:31:07.318 00:31:07.318 ' 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:07.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.318 --rc genhtml_branch_coverage=1 00:31:07.318 --rc genhtml_function_coverage=1 00:31:07.318 --rc genhtml_legend=1 00:31:07.318 --rc geninfo_all_blocks=1 00:31:07.318 --rc geninfo_unexecuted_blocks=1 00:31:07.318 00:31:07.318 ' 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:07.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.318 --rc genhtml_branch_coverage=1 00:31:07.318 --rc genhtml_function_coverage=1 00:31:07.318 --rc genhtml_legend=1 00:31:07.318 --rc geninfo_all_blocks=1 00:31:07.318 --rc geninfo_unexecuted_blocks=1 00:31:07.318 00:31:07.318 ' 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 
00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:07.318 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:07.318 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:31:07.319 01:15:13 
nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:31:07.319 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:31:07.319 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:31:07.319 00:31:07.319 real 0m0.208s 00:31:07.319 user 0m0.135s 00:31:07.319 sys 0m0.087s 00:31:07.319 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.319 01:15:13 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.319 ************************************ 00:31:07.319 END TEST nvmf_discovery_remove_ifc 00:31:07.319 ************************************ 00:31:07.319 01:15:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:31:07.319 01:15:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:07.319 01:15:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:07.319 01:15:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.319 ************************************ 00:31:07.319 START TEST nvmf_identify_kernel_target 00:31:07.319 ************************************ 00:31:07.319 01:15:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:31:07.319 * Looking for test storage... 
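The "[: : integer expression expected" messages that appear each time nvmf/common.sh is sourced (line 33, inside build_nvmf_app_args, and again in the identify_kernel_target trace below) come from an arithmetic test being handed an empty string: [ requires integer operands for -eq, so it prints the warning and returns non-zero, and the run simply continues past it. A minimal reproduction and a defensive rewrite, where FLAG is only a hypothetical stand-in for whatever variable common.sh actually tests at that line:

FLAG=""
if [ "$FLAG" -eq 1 ]; then echo interrupt; fi      # prints "[: : integer expression expected" and takes the false branch
if [ "${FLAG:-0}" -eq 1 ]; then echo interrupt; fi # defaulting the empty value to 0 avoids the warning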
00:31:07.319 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:31:07.319 01:15:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:07.319 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:07.319 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:07.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.579 --rc genhtml_branch_coverage=1 00:31:07.579 --rc genhtml_function_coverage=1 00:31:07.579 --rc genhtml_legend=1 00:31:07.579 --rc geninfo_all_blocks=1 00:31:07.579 --rc geninfo_unexecuted_blocks=1 00:31:07.579 00:31:07.579 ' 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:07.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.579 --rc genhtml_branch_coverage=1 00:31:07.579 --rc genhtml_function_coverage=1 00:31:07.579 --rc genhtml_legend=1 00:31:07.579 --rc geninfo_all_blocks=1 00:31:07.579 --rc geninfo_unexecuted_blocks=1 00:31:07.579 00:31:07.579 ' 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:07.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.579 --rc genhtml_branch_coverage=1 00:31:07.579 --rc genhtml_function_coverage=1 00:31:07.579 --rc genhtml_legend=1 00:31:07.579 --rc geninfo_all_blocks=1 00:31:07.579 --rc geninfo_unexecuted_blocks=1 00:31:07.579 00:31:07.579 ' 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:07.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.579 --rc genhtml_branch_coverage=1 00:31:07.579 --rc genhtml_function_coverage=1 00:31:07.579 --rc genhtml_legend=1 00:31:07.579 --rc geninfo_all_blocks=1 00:31:07.579 --rc geninfo_unexecuted_blocks=1 00:31:07.579 00:31:07.579 ' 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.579 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:07.580 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:07.580 01:15:14 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 
-- # local -ga x722 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:14.152 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
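Each matched E810 port is then resolved to its Linux netdev through sysfs, and the irdma driver is loaded in RoCE mode before the "Found net devices under 0000:af:00.x: cvl_0_x" lines below. A minimal sketch of that lookup, assuming the same two PCI functions reported above (it is only an illustration of the traced pci_net_devs logic, not the script itself):

# resolve the netdev name(s) bound to each RDMA-capable PCI function
for pci in 0000:af:00.0 0000:af:00.1; do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] || continue
        echo "$pci -> ${netdev##*/}"   # e.g. 0000:af:00.0 -> cvl_0_0
    done
done
# E810 ports only expose RDMA devices once irdma is loaded with RoCE enabled
modprobe irdma roce_ena=1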
00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:14.152 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@405 -- # modinfo irdma 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.152 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:14.152 Found net devices under 0000:af:00.0: cvl_0_0 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.153 01:15:19 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:14.153 Found net devices under 0000:af:00.1: cvl_0_1 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:14.153 01:15:19 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:31:14.153 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:31:14.153 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:31:14.153 altname enp175s0f0np0 00:31:14.153 altname ens801f0np0 00:31:14.153 inet 192.168.100.8/24 scope global cvl_0_0 00:31:14.153 valid_lft forever preferred_lft forever 00:31:14.153 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:31:14.153 valid_lft forever preferred_lft forever 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:31:14.153 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:31:14.153 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:31:14.153 altname enp175s0f1np1 00:31:14.153 altname ens801f1np1 00:31:14.153 inet 192.168.100.9/24 scope global cvl_0_1 00:31:14.153 valid_lft forever preferred_lft forever 00:31:14.153 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:31:14.153 valid_lft forever preferred_lft forever 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:14.153 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo cvl_0_0 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ cvl_0_1 
== \c\v\l\_\0\_\1 ]] 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo cvl_0_1 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:14.154 192.168.100.9' 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:14.154 192.168.100.9' 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:14.154 192.168.100.9' 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:14.154 01:15:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:31:16.062 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:31:16.321 Waiting for block devices as requested 00:31:16.321 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:16.321 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:16.580 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:16.580 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:16.580 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:16.840 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:16.840 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:16.840 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:17.099 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:17.099 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:17.099 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:17.358 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:17.358 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:17.358 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:17.358 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:17.618 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:17.618 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:17.618 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:17.618 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:17.618 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:17.618 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:17.618 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:17.618 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:17.618 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:17.618 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:17.618 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:17.618 No valid GPT data, bailing 00:31:17.618 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:17.878 01:15:24 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:31:17.878 No valid GPT data, bailing 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # continue 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:31:17.878 01:15:24 
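A condensed sketch of the per-device scan the trace runs before picking a backing device for the kernel target: zoned namespaces are skipped (nvme1n2 above), and a device with no partition table is treated as free. This simplifies the is_block_zoned/block_in_use helpers shown in the trace, but uses the same blkid and sysfs probes:

for dev in /sys/block/nvme*; do
    name=$(basename "$dev")
    # host-managed/host-aware zoned devices are skipped, as nvme1n2 was above
    [[ -e $dev/queue/zoned && $(cat "$dev/queue/zoned") != none ]] && continue
    # the spdk-gpt.py probe above prints "No valid GPT data, bailing" for free disks;
    # blkid likewise prints nothing when the device carries no partition table
    if [[ -z $(blkid -s PTTYPE -o value /dev/$name) ]]; then
        nvme=/dev/$name   # candidate backing device; /dev/nvme1n1 is the one kept in this run
    fi
done
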
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:31:17.878 00:31:17.878 Discovery Log Number of Records 2, Generation counter 2 00:31:17.878 =====Discovery Log Entry 0====== 00:31:17.878 trtype: rdma 00:31:17.878 adrfam: ipv4 00:31:17.878 subtype: current discovery subsystem 00:31:17.878 treq: not specified, sq flow control disable supported 00:31:17.878 portid: 1 00:31:17.878 trsvcid: 4420 00:31:17.878 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:17.878 traddr: 192.168.100.8 00:31:17.878 eflags: none 00:31:17.878 rdma_prtype: not specified 00:31:17.878 rdma_qptype: connected 00:31:17.878 rdma_cms: rdma-cm 00:31:17.878 rdma_pkey: 0x0000 00:31:17.878 =====Discovery Log Entry 1====== 00:31:17.878 trtype: rdma 00:31:17.878 adrfam: ipv4 00:31:17.878 subtype: nvme subsystem 00:31:17.878 treq: not specified, sq flow control disable supported 00:31:17.878 portid: 1 00:31:17.878 trsvcid: 4420 00:31:17.878 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:17.878 traddr: 192.168.100.8 00:31:17.878 eflags: none 00:31:17.878 rdma_prtype: not specified 00:31:17.878 rdma_qptype: connected 00:31:17.878 rdma_cms: rdma-cm 00:31:17.878 rdma_pkey: 0x0000 00:31:17.878 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:31:17.878 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:18.138 ===================================================== 00:31:18.138 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:18.138 ===================================================== 00:31:18.138 Controller Capabilities/Features 00:31:18.138 ================================ 00:31:18.138 Vendor ID: 0000 00:31:18.138 Subsystem Vendor ID: 0000 00:31:18.138 Serial Number: b384c89ff10b531b41e5 00:31:18.138 Model Number: Linux 00:31:18.138 Firmware Version: 6.8.9-20 00:31:18.138 Recommended Arb Burst: 0 00:31:18.138 IEEE OUI Identifier: 00 00 00 00:31:18.138 Multi-path I/O 00:31:18.138 May have multiple subsystem ports: No 00:31:18.138 May have multiple controllers: No 00:31:18.138 Associated with SR-IOV VF: No 00:31:18.138 Max Data Transfer Size: Unlimited 00:31:18.138 Max Number of Namespaces: 0 00:31:18.138 Max Number of I/O Queues: 1024 00:31:18.138 NVMe Specification Version (VS): 1.3 00:31:18.138 NVMe Specification Version (Identify): 1.3 00:31:18.138 Maximum Queue Entries: 128 00:31:18.138 Contiguous Queues Required: No 00:31:18.138 Arbitration Mechanisms Supported 00:31:18.138 Weighted Round Robin: Not Supported 00:31:18.138 Vendor Specific: Not Supported 00:31:18.138 Reset Timeout: 7500 ms 00:31:18.138 Doorbell Stride: 4 bytes 00:31:18.138 NVM Subsystem Reset: Not Supported 
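Taken together, the configure_kernel_target steps traced above build a kernel NVMe-oF target through nvmet configfs and make it answer the nvme discover shown here. A simplified sketch of that sequence follows; the trace records the mkdir/echo/ln commands but not their redirect targets, so the attribute paths below are the standard nvmet configfs names and are an assumption:

nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet

modprobe nvmet
mkdir $nvmet/subsystems/$nqn
mkdir $nvmet/subsystems/$nqn/namespaces/1
mkdir $nvmet/ports/1

echo SPDK-$nqn     > $nvmet/subsystems/$nqn/attr_model            # matches "Model Number: SPDK-nqn..." in the identify output below
echo 1             > $nvmet/subsystems/$nqn/attr_allow_any_host
echo /dev/nvme1n1  > $nvmet/subsystems/$nqn/namespaces/1/device_path
echo 1             > $nvmet/subsystems/$nqn/namespaces/1/enable

echo 192.168.100.8 > $nvmet/ports/1/addr_traddr
echo rdma          > $nvmet/ports/1/addr_trtype
echo 4420          > $nvmet/ports/1/addr_trsvcid
echo ipv4          > $nvmet/ports/1/addr_adrfam

# exposing the subsystem on the port is what makes it appear as Discovery Log Entry 1
ln -s $nvmet/subsystems/$nqn $nvmet/ports/1/subsystems/$nqn
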
00:31:18.138 Command Sets Supported 00:31:18.138 NVM Command Set: Supported 00:31:18.138 Boot Partition: Not Supported 00:31:18.138 Memory Page Size Minimum: 4096 bytes 00:31:18.138 Memory Page Size Maximum: 4096 bytes 00:31:18.138 Persistent Memory Region: Not Supported 00:31:18.138 Optional Asynchronous Events Supported 00:31:18.138 Namespace Attribute Notices: Not Supported 00:31:18.138 Firmware Activation Notices: Not Supported 00:31:18.138 ANA Change Notices: Not Supported 00:31:18.138 PLE Aggregate Log Change Notices: Not Supported 00:31:18.138 LBA Status Info Alert Notices: Not Supported 00:31:18.138 EGE Aggregate Log Change Notices: Not Supported 00:31:18.138 Normal NVM Subsystem Shutdown event: Not Supported 00:31:18.138 Zone Descriptor Change Notices: Not Supported 00:31:18.138 Discovery Log Change Notices: Supported 00:31:18.138 Controller Attributes 00:31:18.138 128-bit Host Identifier: Not Supported 00:31:18.138 Non-Operational Permissive Mode: Not Supported 00:31:18.138 NVM Sets: Not Supported 00:31:18.138 Read Recovery Levels: Not Supported 00:31:18.138 Endurance Groups: Not Supported 00:31:18.138 Predictable Latency Mode: Not Supported 00:31:18.138 Traffic Based Keep ALive: Not Supported 00:31:18.138 Namespace Granularity: Not Supported 00:31:18.138 SQ Associations: Not Supported 00:31:18.138 UUID List: Not Supported 00:31:18.138 Multi-Domain Subsystem: Not Supported 00:31:18.138 Fixed Capacity Management: Not Supported 00:31:18.138 Variable Capacity Management: Not Supported 00:31:18.138 Delete Endurance Group: Not Supported 00:31:18.138 Delete NVM Set: Not Supported 00:31:18.138 Extended LBA Formats Supported: Not Supported 00:31:18.138 Flexible Data Placement Supported: Not Supported 00:31:18.138 00:31:18.138 Controller Memory Buffer Support 00:31:18.138 ================================ 00:31:18.138 Supported: No 00:31:18.138 00:31:18.138 Persistent Memory Region Support 00:31:18.138 ================================ 00:31:18.138 Supported: No 00:31:18.138 00:31:18.138 Admin Command Set Attributes 00:31:18.138 ============================ 00:31:18.138 Security Send/Receive: Not Supported 00:31:18.138 Format NVM: Not Supported 00:31:18.138 Firmware Activate/Download: Not Supported 00:31:18.138 Namespace Management: Not Supported 00:31:18.138 Device Self-Test: Not Supported 00:31:18.138 Directives: Not Supported 00:31:18.138 NVMe-MI: Not Supported 00:31:18.138 Virtualization Management: Not Supported 00:31:18.138 Doorbell Buffer Config: Not Supported 00:31:18.138 Get LBA Status Capability: Not Supported 00:31:18.138 Command & Feature Lockdown Capability: Not Supported 00:31:18.138 Abort Command Limit: 1 00:31:18.138 Async Event Request Limit: 1 00:31:18.138 Number of Firmware Slots: N/A 00:31:18.138 Firmware Slot 1 Read-Only: N/A 00:31:18.138 Firmware Activation Without Reset: N/A 00:31:18.139 Multiple Update Detection Support: N/A 00:31:18.139 Firmware Update Granularity: No Information Provided 00:31:18.139 Per-Namespace SMART Log: No 00:31:18.139 Asymmetric Namespace Access Log Page: Not Supported 00:31:18.139 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:18.139 Command Effects Log Page: Not Supported 00:31:18.139 Get Log Page Extended Data: Supported 00:31:18.139 Telemetry Log Pages: Not Supported 00:31:18.139 Persistent Event Log Pages: Not Supported 00:31:18.139 Supported Log Pages Log Page: May Support 00:31:18.139 Commands Supported & Effects Log Page: Not Supported 00:31:18.139 Feature Identifiers & Effects Log Page:May Support 00:31:18.139 NVMe-MI 
Commands & Effects Log Page: May Support 00:31:18.139 Data Area 4 for Telemetry Log: Not Supported 00:31:18.139 Error Log Page Entries Supported: 1 00:31:18.139 Keep Alive: Not Supported 00:31:18.139 00:31:18.139 NVM Command Set Attributes 00:31:18.139 ========================== 00:31:18.139 Submission Queue Entry Size 00:31:18.139 Max: 1 00:31:18.139 Min: 1 00:31:18.139 Completion Queue Entry Size 00:31:18.139 Max: 1 00:31:18.139 Min: 1 00:31:18.139 Number of Namespaces: 0 00:31:18.139 Compare Command: Not Supported 00:31:18.139 Write Uncorrectable Command: Not Supported 00:31:18.139 Dataset Management Command: Not Supported 00:31:18.139 Write Zeroes Command: Not Supported 00:31:18.139 Set Features Save Field: Not Supported 00:31:18.139 Reservations: Not Supported 00:31:18.139 Timestamp: Not Supported 00:31:18.139 Copy: Not Supported 00:31:18.139 Volatile Write Cache: Not Present 00:31:18.139 Atomic Write Unit (Normal): 1 00:31:18.139 Atomic Write Unit (PFail): 1 00:31:18.139 Atomic Compare & Write Unit: 1 00:31:18.139 Fused Compare & Write: Not Supported 00:31:18.139 Scatter-Gather List 00:31:18.139 SGL Command Set: Supported 00:31:18.139 SGL Keyed: Supported 00:31:18.139 SGL Bit Bucket Descriptor: Not Supported 00:31:18.139 SGL Metadata Pointer: Not Supported 00:31:18.139 Oversized SGL: Not Supported 00:31:18.139 SGL Metadata Address: Not Supported 00:31:18.139 SGL Offset: Supported 00:31:18.139 Transport SGL Data Block: Not Supported 00:31:18.139 Replay Protected Memory Block: Not Supported 00:31:18.139 00:31:18.139 Firmware Slot Information 00:31:18.139 ========================= 00:31:18.139 Active slot: 0 00:31:18.139 00:31:18.139 00:31:18.139 Error Log 00:31:18.139 ========= 00:31:18.139 00:31:18.139 Active Namespaces 00:31:18.139 ================= 00:31:18.139 Discovery Log Page 00:31:18.139 ================== 00:31:18.139 Generation Counter: 2 00:31:18.139 Number of Records: 2 00:31:18.139 Record Format: 0 00:31:18.139 00:31:18.139 Discovery Log Entry 0 00:31:18.139 ---------------------- 00:31:18.139 Transport Type: 1 (RDMA) 00:31:18.139 Address Family: 1 (IPv4) 00:31:18.139 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:18.139 Entry Flags: 00:31:18.139 Duplicate Returned Information: 0 00:31:18.139 Explicit Persistent Connection Support for Discovery: 0 00:31:18.139 Transport Requirements: 00:31:18.139 Secure Channel: Not Specified 00:31:18.139 Port ID: 1 (0x0001) 00:31:18.139 Controller ID: 65535 (0xffff) 00:31:18.139 Admin Max SQ Size: 32 00:31:18.139 Transport Service Identifier: 4420 00:31:18.139 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:18.139 Transport Address: 192.168.100.8 00:31:18.139 Transport Specific Address Subtype - RDMA 00:31:18.139 RDMA QP Service Type: 1 (Reliable Connected) 00:31:18.139 RDMA Provider Type: 1 (No provider specified) 00:31:18.139 RDMA CM Service: 1 (RDMA_CM) 00:31:18.139 Discovery Log Entry 1 00:31:18.139 ---------------------- 00:31:18.139 Transport Type: 1 (RDMA) 00:31:18.139 Address Family: 1 (IPv4) 00:31:18.139 Subsystem Type: 2 (NVM Subsystem) 00:31:18.139 Entry Flags: 00:31:18.139 Duplicate Returned Information: 0 00:31:18.139 Explicit Persistent Connection Support for Discovery: 0 00:31:18.139 Transport Requirements: 00:31:18.139 Secure Channel: Not Specified 00:31:18.139 Port ID: 1 (0x0001) 00:31:18.139 Controller ID: 65535 (0xffff) 00:31:18.139 Admin Max SQ Size: 32 00:31:18.139 Transport Service Identifier: 4420 00:31:18.139 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:18.139 
Transport Address: 192.168.100.8 00:31:18.139 Transport Specific Address Subtype - RDMA 00:31:18.139 RDMA QP Service Type: 1 (Reliable Connected) 00:31:18.139 RDMA Provider Type: 1 (No provider specified) 00:31:18.139 RDMA CM Service: 1 (RDMA_CM) 00:31:18.139 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:18.400 get_feature(0x01) failed 00:31:18.400 get_feature(0x02) failed 00:31:18.400 get_feature(0x04) failed 00:31:18.400 ===================================================== 00:31:18.400 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:31:18.400 ===================================================== 00:31:18.400 Controller Capabilities/Features 00:31:18.400 ================================ 00:31:18.400 Vendor ID: 0000 00:31:18.400 Subsystem Vendor ID: 0000 00:31:18.400 Serial Number: a00a4401f6d7f1307de4 00:31:18.400 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:18.400 Firmware Version: 6.8.9-20 00:31:18.400 Recommended Arb Burst: 6 00:31:18.400 IEEE OUI Identifier: 00 00 00 00:31:18.400 Multi-path I/O 00:31:18.400 May have multiple subsystem ports: Yes 00:31:18.400 May have multiple controllers: Yes 00:31:18.400 Associated with SR-IOV VF: No 00:31:18.400 Max Data Transfer Size: 1048576 00:31:18.400 Max Number of Namespaces: 1024 00:31:18.400 Max Number of I/O Queues: 128 00:31:18.400 NVMe Specification Version (VS): 1.3 00:31:18.400 NVMe Specification Version (Identify): 1.3 00:31:18.400 Maximum Queue Entries: 128 00:31:18.400 Contiguous Queues Required: No 00:31:18.400 Arbitration Mechanisms Supported 00:31:18.400 Weighted Round Robin: Not Supported 00:31:18.400 Vendor Specific: Not Supported 00:31:18.400 Reset Timeout: 7500 ms 00:31:18.400 Doorbell Stride: 4 bytes 00:31:18.400 NVM Subsystem Reset: Not Supported 00:31:18.400 Command Sets Supported 00:31:18.400 NVM Command Set: Supported 00:31:18.400 Boot Partition: Not Supported 00:31:18.400 Memory Page Size Minimum: 4096 bytes 00:31:18.400 Memory Page Size Maximum: 4096 bytes 00:31:18.400 Persistent Memory Region: Not Supported 00:31:18.400 Optional Asynchronous Events Supported 00:31:18.400 Namespace Attribute Notices: Supported 00:31:18.400 Firmware Activation Notices: Not Supported 00:31:18.400 ANA Change Notices: Supported 00:31:18.400 PLE Aggregate Log Change Notices: Not Supported 00:31:18.400 LBA Status Info Alert Notices: Not Supported 00:31:18.400 EGE Aggregate Log Change Notices: Not Supported 00:31:18.400 Normal NVM Subsystem Shutdown event: Not Supported 00:31:18.400 Zone Descriptor Change Notices: Not Supported 00:31:18.400 Discovery Log Change Notices: Not Supported 00:31:18.400 Controller Attributes 00:31:18.400 128-bit Host Identifier: Supported 00:31:18.400 Non-Operational Permissive Mode: Not Supported 00:31:18.400 NVM Sets: Not Supported 00:31:18.400 Read Recovery Levels: Not Supported 00:31:18.400 Endurance Groups: Not Supported 00:31:18.400 Predictable Latency Mode: Not Supported 00:31:18.400 Traffic Based Keep ALive: Supported 00:31:18.400 Namespace Granularity: Not Supported 00:31:18.400 SQ Associations: Not Supported 00:31:18.400 UUID List: Not Supported 00:31:18.400 Multi-Domain Subsystem: Not Supported 00:31:18.400 Fixed Capacity Management: Not Supported 00:31:18.400 Variable Capacity Management: Not Supported 00:31:18.400 Delete Endurance 
Group: Not Supported 00:31:18.400 Delete NVM Set: Not Supported 00:31:18.400 Extended LBA Formats Supported: Not Supported 00:31:18.400 Flexible Data Placement Supported: Not Supported 00:31:18.400 00:31:18.400 Controller Memory Buffer Support 00:31:18.400 ================================ 00:31:18.400 Supported: No 00:31:18.400 00:31:18.400 Persistent Memory Region Support 00:31:18.400 ================================ 00:31:18.400 Supported: No 00:31:18.400 00:31:18.400 Admin Command Set Attributes 00:31:18.400 ============================ 00:31:18.400 Security Send/Receive: Not Supported 00:31:18.400 Format NVM: Not Supported 00:31:18.400 Firmware Activate/Download: Not Supported 00:31:18.400 Namespace Management: Not Supported 00:31:18.400 Device Self-Test: Not Supported 00:31:18.400 Directives: Not Supported 00:31:18.400 NVMe-MI: Not Supported 00:31:18.400 Virtualization Management: Not Supported 00:31:18.400 Doorbell Buffer Config: Not Supported 00:31:18.400 Get LBA Status Capability: Not Supported 00:31:18.400 Command & Feature Lockdown Capability: Not Supported 00:31:18.400 Abort Command Limit: 4 00:31:18.400 Async Event Request Limit: 4 00:31:18.400 Number of Firmware Slots: N/A 00:31:18.400 Firmware Slot 1 Read-Only: N/A 00:31:18.400 Firmware Activation Without Reset: N/A 00:31:18.400 Multiple Update Detection Support: N/A 00:31:18.400 Firmware Update Granularity: No Information Provided 00:31:18.400 Per-Namespace SMART Log: Yes 00:31:18.400 Asymmetric Namespace Access Log Page: Supported 00:31:18.400 ANA Transition Time : 10 sec 00:31:18.400 00:31:18.400 Asymmetric Namespace Access Capabilities 00:31:18.400 ANA Optimized State : Supported 00:31:18.400 ANA Non-Optimized State : Supported 00:31:18.400 ANA Inaccessible State : Supported 00:31:18.400 ANA Persistent Loss State : Supported 00:31:18.400 ANA Change State : Supported 00:31:18.400 ANAGRPID is not changed : No 00:31:18.400 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:18.400 00:31:18.400 ANA Group Identifier Maximum : 128 00:31:18.400 Number of ANA Group Identifiers : 128 00:31:18.400 Max Number of Allowed Namespaces : 1024 00:31:18.400 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:18.400 Command Effects Log Page: Supported 00:31:18.400 Get Log Page Extended Data: Supported 00:31:18.401 Telemetry Log Pages: Not Supported 00:31:18.401 Persistent Event Log Pages: Not Supported 00:31:18.401 Supported Log Pages Log Page: May Support 00:31:18.401 Commands Supported & Effects Log Page: Not Supported 00:31:18.401 Feature Identifiers & Effects Log Page:May Support 00:31:18.401 NVMe-MI Commands & Effects Log Page: May Support 00:31:18.401 Data Area 4 for Telemetry Log: Not Supported 00:31:18.401 Error Log Page Entries Supported: 128 00:31:18.401 Keep Alive: Supported 00:31:18.401 Keep Alive Granularity: 1000 ms 00:31:18.401 00:31:18.401 NVM Command Set Attributes 00:31:18.401 ========================== 00:31:18.401 Submission Queue Entry Size 00:31:18.401 Max: 64 00:31:18.401 Min: 64 00:31:18.401 Completion Queue Entry Size 00:31:18.401 Max: 16 00:31:18.401 Min: 16 00:31:18.401 Number of Namespaces: 1024 00:31:18.401 Compare Command: Not Supported 00:31:18.401 Write Uncorrectable Command: Not Supported 00:31:18.401 Dataset Management Command: Supported 00:31:18.401 Write Zeroes Command: Supported 00:31:18.401 Set Features Save Field: Not Supported 00:31:18.401 Reservations: Not Supported 00:31:18.401 Timestamp: Not Supported 00:31:18.401 Copy: Not Supported 00:31:18.401 Volatile Write Cache: Present 00:31:18.401 Atomic 
Write Unit (Normal): 1 00:31:18.401 Atomic Write Unit (PFail): 1 00:31:18.401 Atomic Compare & Write Unit: 1 00:31:18.401 Fused Compare & Write: Not Supported 00:31:18.401 Scatter-Gather List 00:31:18.401 SGL Command Set: Supported 00:31:18.401 SGL Keyed: Supported 00:31:18.401 SGL Bit Bucket Descriptor: Not Supported 00:31:18.401 SGL Metadata Pointer: Not Supported 00:31:18.401 Oversized SGL: Not Supported 00:31:18.401 SGL Metadata Address: Not Supported 00:31:18.401 SGL Offset: Supported 00:31:18.401 Transport SGL Data Block: Not Supported 00:31:18.401 Replay Protected Memory Block: Not Supported 00:31:18.401 00:31:18.401 Firmware Slot Information 00:31:18.401 ========================= 00:31:18.401 Active slot: 0 00:31:18.401 00:31:18.401 Asymmetric Namespace Access 00:31:18.401 =========================== 00:31:18.401 Change Count : 0 00:31:18.401 Number of ANA Group Descriptors : 1 00:31:18.401 ANA Group Descriptor : 0 00:31:18.401 ANA Group ID : 1 00:31:18.401 Number of NSID Values : 1 00:31:18.401 Change Count : 0 00:31:18.401 ANA State : 1 00:31:18.401 Namespace Identifier : 1 00:31:18.401 00:31:18.401 Commands Supported and Effects 00:31:18.401 ============================== 00:31:18.401 Admin Commands 00:31:18.401 -------------- 00:31:18.401 Get Log Page (02h): Supported 00:31:18.401 Identify (06h): Supported 00:31:18.401 Abort (08h): Supported 00:31:18.401 Set Features (09h): Supported 00:31:18.401 Get Features (0Ah): Supported 00:31:18.401 Asynchronous Event Request (0Ch): Supported 00:31:18.401 Keep Alive (18h): Supported 00:31:18.401 I/O Commands 00:31:18.401 ------------ 00:31:18.401 Flush (00h): Supported 00:31:18.401 Write (01h): Supported LBA-Change 00:31:18.401 Read (02h): Supported 00:31:18.401 Write Zeroes (08h): Supported LBA-Change 00:31:18.401 Dataset Management (09h): Supported 00:31:18.401 00:31:18.401 Error Log 00:31:18.401 ========= 00:31:18.401 Entry: 0 00:31:18.401 Error Count: 0x3 00:31:18.401 Submission Queue Id: 0x0 00:31:18.401 Command Id: 0x5 00:31:18.401 Phase Bit: 0 00:31:18.401 Status Code: 0x2 00:31:18.401 Status Code Type: 0x0 00:31:18.401 Do Not Retry: 1 00:31:18.401 Error Location: 0x28 00:31:18.401 LBA: 0x0 00:31:18.401 Namespace: 0x0 00:31:18.401 Vendor Log Page: 0x0 00:31:18.401 ----------- 00:31:18.401 Entry: 1 00:31:18.401 Error Count: 0x2 00:31:18.401 Submission Queue Id: 0x0 00:31:18.401 Command Id: 0x5 00:31:18.401 Phase Bit: 0 00:31:18.401 Status Code: 0x2 00:31:18.401 Status Code Type: 0x0 00:31:18.401 Do Not Retry: 1 00:31:18.401 Error Location: 0x28 00:31:18.401 LBA: 0x0 00:31:18.401 Namespace: 0x0 00:31:18.401 Vendor Log Page: 0x0 00:31:18.401 ----------- 00:31:18.401 Entry: 2 00:31:18.401 Error Count: 0x1 00:31:18.401 Submission Queue Id: 0x0 00:31:18.401 Command Id: 0x0 00:31:18.401 Phase Bit: 0 00:31:18.401 Status Code: 0x2 00:31:18.401 Status Code Type: 0x0 00:31:18.401 Do Not Retry: 1 00:31:18.401 Error Location: 0x28 00:31:18.401 LBA: 0x0 00:31:18.401 Namespace: 0x0 00:31:18.401 Vendor Log Page: 0x0 00:31:18.401 00:31:18.401 Number of Queues 00:31:18.401 ================ 00:31:18.401 Number of I/O Submission Queues: 128 00:31:18.401 Number of I/O Completion Queues: 128 00:31:18.401 00:31:18.401 ZNS Specific Controller Data 00:31:18.401 ============================ 00:31:18.401 Zone Append Size Limit: 0 00:31:18.401 00:31:18.401 00:31:18.401 Active Namespaces 00:31:18.401 ================= 00:31:18.401 get_feature(0x05) failed 00:31:18.401 Namespace ID:1 00:31:18.401 Command Set Identifier: NVM (00h) 00:31:18.401 Deallocate: 
Supported 00:31:18.401 Deallocated/Unwritten Error: Not Supported 00:31:18.401 Deallocated Read Value: Unknown 00:31:18.401 Deallocate in Write Zeroes: Not Supported 00:31:18.401 Deallocated Guard Field: 0xFFFF 00:31:18.401 Flush: Supported 00:31:18.401 Reservation: Not Supported 00:31:18.401 Namespace Sharing Capabilities: Multiple Controllers 00:31:18.401 Size (in LBAs): 4194304 (2GiB) 00:31:18.401 Capacity (in LBAs): 4194304 (2GiB) 00:31:18.401 Utilization (in LBAs): 4194304 (2GiB) 00:31:18.401 UUID: 8acab47b-d209-4e71-9528-d37df0a1cf84 00:31:18.401 Thin Provisioning: Not Supported 00:31:18.401 Per-NS Atomic Units: Yes 00:31:18.401 Atomic Boundary Size (Normal): 0 00:31:18.401 Atomic Boundary Size (PFail): 0 00:31:18.401 Atomic Boundary Offset: 0 00:31:18.401 NGUID/EUI64 Never Reused: No 00:31:18.401 ANA group ID: 1 00:31:18.401 Namespace Write Protected: No 00:31:18.401 Number of LBA Formats: 1 00:31:18.401 Current LBA Format: LBA Format #00 00:31:18.401 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:18.401 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:18.401 rmmod nvme_rdma 00:31:18.401 rmmod nvme_fabrics 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:31:18.401 01:15:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:18.401 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:18.401 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:18.401 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:18.401 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:18.401 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:31:18.401 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:18.401 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:18.401 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:31:18.402 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:18.402 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:18.402 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:31:18.402 01:15:25 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:31:20.945 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:31:21.605 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:21.605 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:22.588 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:22.588 00:31:22.588 real 0m15.184s 00:31:22.588 user 0m4.816s 00:31:22.588 sys 0m8.845s 00:31:22.588 01:15:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:22.588 01:15:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:22.588 ************************************ 00:31:22.588 END TEST nvmf_identify_kernel_target 00:31:22.588 ************************************ 00:31:22.588 01:15:29 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:22.588 01:15:29 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:22.588 01:15:29 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:22.588 01:15:29 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.588 ************************************ 00:31:22.588 START TEST nvmf_auth_host 00:31:22.588 ************************************ 00:31:22.588 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:22.588 * Looking for test storage... 
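The clean_kernel_target calls traced above undo that configfs setup in reverse. A sketch with the same ordering; as with the setup sketch, the echo/rm destinations are assumed from the standard nvmet layout rather than shown in the trace:

nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet

echo 0 > $nvmet/subsystems/$nqn/namespaces/1/enable   # assumed destination of the 'echo 0' above
rm -f  $nvmet/ports/1/subsystems/$nqn                 # drop the port -> subsystem link first
rmdir  $nvmet/subsystems/$nqn/namespaces/1
rmdir  $nvmet/ports/1
rmdir  $nvmet/subsystems/$nqn
modprobe -r nvmet_rdma nvmet
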
00:31:22.588 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:31:22.588 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:22.588 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:22.588 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:22.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.862 --rc genhtml_branch_coverage=1 00:31:22.862 --rc genhtml_function_coverage=1 00:31:22.862 --rc genhtml_legend=1 00:31:22.862 --rc geninfo_all_blocks=1 00:31:22.862 --rc geninfo_unexecuted_blocks=1 00:31:22.862 00:31:22.862 ' 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:22.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.862 --rc genhtml_branch_coverage=1 00:31:22.862 --rc genhtml_function_coverage=1 00:31:22.862 --rc genhtml_legend=1 00:31:22.862 --rc geninfo_all_blocks=1 00:31:22.862 --rc geninfo_unexecuted_blocks=1 00:31:22.862 00:31:22.862 ' 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:22.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.862 --rc genhtml_branch_coverage=1 00:31:22.862 --rc genhtml_function_coverage=1 00:31:22.862 --rc genhtml_legend=1 00:31:22.862 --rc geninfo_all_blocks=1 00:31:22.862 --rc geninfo_unexecuted_blocks=1 00:31:22.862 00:31:22.862 ' 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:22.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.862 --rc genhtml_branch_coverage=1 00:31:22.862 --rc genhtml_function_coverage=1 00:31:22.862 --rc genhtml_legend=1 00:31:22.862 --rc geninfo_all_blocks=1 00:31:22.862 --rc geninfo_unexecuted_blocks=1 00:31:22.862 00:31:22.862 ' 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.862 01:15:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.862 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:22.863 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:22.863 01:15:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:28.359 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 
- 0x159b)' 00:31:28.359 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@405 -- # modinfo irdma 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:28.359 Found net devices under 0000:af:00.0: cvl_0_0 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:28.359 Found net devices under 0000:af:00.1: cvl_0_1 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:28.359 01:15:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:28.359 01:15:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:28.359 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:28.359 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:28.359 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:28.359 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:28.359 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:28.359 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:28.359 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:28.359 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:28.359 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:28.359 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo cvl_0_0 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo cvl_0_1 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:28.645 01:15:35 
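The trace above is load_ib_rdma_modules and the start of allocate_nic_ips from nvmf/common.sh: the generic InfiniBand/RDMA kernel modules are loaded, irdma is reloaded with RoCE enabled for the E810 ports, and the RDMA-capable netdevs are enumerated. A condensed sketch of those steps, using only the commands visible in the trace (the surrounding checks in common.sh are omitted):

# Load the kernel RDMA stack (load_ib_rdma_modules)
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done

# E810/ice ports need irdma with RoCE enabled (done earlier in the trace)
modprobe irdma roce_ena=1

# Enumerate RDMA-capable net devices via the helper used by get_rdma_if_list
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net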
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:31:28.645 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:31:28.645 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:31:28.645 altname enp175s0f0np0 00:31:28.645 altname ens801f0np0 00:31:28.645 inet 192.168.100.8/24 scope global cvl_0_0 00:31:28.645 valid_lft forever preferred_lft forever 00:31:28.645 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:31:28.645 valid_lft forever preferred_lft forever 00:31:28.645 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:31:28.646 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:31:28.646 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:31:28.646 altname enp175s0f1np1 00:31:28.646 altname ens801f1np1 00:31:28.646 inet 192.168.100.9/24 scope global cvl_0_1 00:31:28.646 valid_lft forever preferred_lft forever 00:31:28.646 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:31:28.646 valid_lft forever preferred_lft forever 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:28.646 01:15:35 
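get_ip_address, traced above for cvl_0_0 and cvl_0_1, derives an interface's IPv4 address by parsing the one-line output of ip -o -4 addr show. A standalone sketch of that helper, with the interface names from this run:

get_ip_address() {
    local interface=$1
    # Field 4 of the one-line output is "ADDR/PREFIX"; strip the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address cvl_0_0   # 192.168.100.8 in this run
get_ip_address cvl_0_1   # 192.168.100.9 in this run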
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo cvl_0_0 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo cvl_0_1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:28.646 192.168.100.9' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:28.646 192.168.100.9' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:28.646 
01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:28.646 192.168.100.9' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=516369 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 516369 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 516369 ']' 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
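The block above wraps up the environment and starts the target: the two interface addresses are folded into RDMA_IP_LIST and split with head/tail into the first and second target IPs, nvme-rdma is loaded, and nvmfappstart launches nvmf_tgt with nvme_auth debug logging before waiting on the RPC socket. A minimal sketch of that sequence; the polling loop is a stand-in for the waitforlisten helper, whose body is not shown in this log:

# Pick the first and second RDMA target addresses (nvmf/common.sh@485-486)
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma

# Start the SPDK target with DH-CHAP (nvme_auth) debug logging enabled
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

# Stand-in for waitforlisten: block until the RPC UNIX socket exists
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done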
00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:28.646 01:15:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.584 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:29.584 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4014cb5d8d1abc85f15e7b3282e58a6b 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eRp 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4014cb5d8d1abc85f15e7b3282e58a6b 0 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4014cb5d8d1abc85f15e7b3282e58a6b 0 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4014cb5d8d1abc85f15e7b3282e58a6b 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eRp 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eRp 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eRp 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len 
file key 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1adf2ce83c70d65b98511fb33e8793ec84c60d6a0628a3f15ad7a38713296094 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.etZ 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1adf2ce83c70d65b98511fb33e8793ec84c60d6a0628a3f15ad7a38713296094 3 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1adf2ce83c70d65b98511fb33e8793ec84c60d6a0628a3f15ad7a38713296094 3 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1adf2ce83c70d65b98511fb33e8793ec84c60d6a0628a3f15ad7a38713296094 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.etZ 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.etZ 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.etZ 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6aa7ce8c5db4fbafc075556d7ea319043bc637671ae1e931 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dtK 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6aa7ce8c5db4fbafc075556d7ea319043bc637671ae1e931 0 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key 
DHHC-1 6aa7ce8c5db4fbafc075556d7ea319043bc637671ae1e931 0 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6aa7ce8c5db4fbafc075556d7ea319043bc637671ae1e931 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:29.585 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dtK 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dtK 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.dtK 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f60addff7fb0a89e08f7b71a0e191b372eac7a7b88ff7ca3 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.DXn 00:31:29.844 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f60addff7fb0a89e08f7b71a0e191b372eac7a7b88ff7ca3 2 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f60addff7fb0a89e08f7b71a0e191b372eac7a7b88ff7ca3 2 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f60addff7fb0a89e08f7b71a0e191b372eac7a7b88ff7ca3 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.DXn 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.DXn 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.DXn 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6ee90c300c5a9b12e7c2b26a09a92aca 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TBG 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6ee90c300c5a9b12e7c2b26a09a92aca 1 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6ee90c300c5a9b12e7c2b26a09a92aca 1 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6ee90c300c5a9b12e7c2b26a09a92aca 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TBG 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TBG 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.TBG 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5e9883e0e3ae767d91fefcd70ec93d2c 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rMW 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5e9883e0e3ae767d91fefcd70ec93d2c 1 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5e9883e0e3ae767d91fefcd70ec93d2c 1 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5e9883e0e3ae767d91fefcd70ec93d2c 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rMW 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rMW 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.rMW 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1b7459913ca64d0c5c2a9024ad1132b00f65a52a0221ae92 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1Sh 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1b7459913ca64d0c5c2a9024ad1132b00f65a52a0221ae92 2 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1b7459913ca64d0c5c2a9024ad1132b00f65a52a0221ae92 2 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1b7459913ca64d0c5c2a9024ad1132b00f65a52a0221ae92 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:29.845 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1Sh 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1Sh 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.1Sh 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 
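Every gen_dhchap_key call traced in this stretch follows the same recipe: draw random bytes with xxd, create a digest-named temp file, run a short python snippet (its body is not echoed here) that wraps the hex string into a DHHC-1:<digest index>:...: secret, and lock the file down to mode 0600. A hedged sketch of that recipe; the encoding performed by the python step is not visible in the log, so it is left as a placeholder comment rather than reproduced:

gen_dhchap_key_sketch() {
    local digest=$1 len=$2            # e.g. "sha384" 48
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # format_dhchap_key: the untraced python step turns $key into
    # "DHHC-1:<digest index>:<encoded secret>:" and writes it to $file
    chmod 0600 "$file"
    echo "$file"
}

ckeys[3]=$(gen_dhchap_key_sketch null 32)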
00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=20f8eb310a94a735abb078173cbd4f74 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.A2Z 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 20f8eb310a94a735abb078173cbd4f74 0 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 20f8eb310a94a735abb078173cbd4f74 0 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=20f8eb310a94a735abb078173cbd4f74 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.A2Z 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.A2Z 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.A2Z 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fdc281ee9ffba29a766205345eb4b8c6164cceaca03844be4f7527afd8366678 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.NJx 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fdc281ee9ffba29a766205345eb4b8c6164cceaca03844be4f7527afd8366678 3 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fdc281ee9ffba29a766205345eb4b8c6164cceaca03844be4f7527afd8366678 3 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fdc281ee9ffba29a766205345eb4b8c6164cceaca03844be4f7527afd8366678 00:31:30.104 01:15:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.NJx 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.NJx 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.NJx 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 516369 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 516369 ']' 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.104 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.105 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eRp 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.etZ ]] 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.etZ 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.363 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.dtK 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # 
[[ -n /tmp/spdk.key-sha384.DXn ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DXn 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.TBG 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.rMW ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rMW 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1Sh 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.A2Z ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.A2Z 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.NJx 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:30.364 01:15:36 
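With the target listening, host/auth.sh registers each generated secret, and its controller counterpart when one exists, in the SPDK file-based keyring; ckeys[4] is empty, so key4 gets no companion. The calls above, spelled out with this run's file names (rpc_cmd is the suite's wrapper around scripts/rpc.py on /var/tmp/spdk.sock):

rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.eRp
rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.etZ
rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-null.dtK
rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DXn
rpc_cmd keyring_file_add_key key2  /tmp/spdk.key-sha256.TBG
rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rMW
rpc_cmd keyring_file_add_key key3  /tmp/spdk.key-sha384.1Sh
rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.A2Z
rpc_cmd keyring_file_add_key key4  /tmp/spdk.key-sha512.NJx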
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:30.364 01:15:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:31:32.895 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:31:33.153 Waiting for block devices as requested 00:31:33.153 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:33.412 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:33.412 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:33.412 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:33.671 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:33.671 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:33.671 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:33.929 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:33.929 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:33.929 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:33.929 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:34.188 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:34.188 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:34.188 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:34.188 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:34.447 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:34.447 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:35.015 No valid GPT data, bailing 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:31:35.015 01:15:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:31:35.015 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:31:35.274 No valid GPT data, bailing 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # continue 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:31:35.274 00:31:35.274 Discovery Log Number of Records 2, Generation counter 2 00:31:35.274 =====Discovery Log Entry 0====== 00:31:35.274 trtype: rdma 00:31:35.274 adrfam: ipv4 00:31:35.274 subtype: current discovery subsystem 00:31:35.274 treq: not specified, sq flow control disable supported 00:31:35.274 portid: 1 00:31:35.274 trsvcid: 4420 00:31:35.274 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:35.274 traddr: 192.168.100.8 00:31:35.274 eflags: none 00:31:35.274 rdma_prtype: not specified 00:31:35.274 rdma_qptype: connected 00:31:35.274 rdma_cms: rdma-cm 00:31:35.274 rdma_pkey: 0x0000 00:31:35.274 =====Discovery Log Entry 1====== 00:31:35.274 trtype: rdma 00:31:35.274 adrfam: ipv4 00:31:35.274 subtype: nvme subsystem 00:31:35.274 treq: not specified, sq flow control disable supported 00:31:35.274 portid: 1 00:31:35.274 trsvcid: 4420 00:31:35.274 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:35.274 traddr: 192.168.100.8 00:31:35.274 eflags: none 00:31:35.274 rdma_prtype: not specified 00:31:35.274 rdma_qptype: connected 00:31:35.274 rdma_cms: rdma-cm 00:31:35.274 rdma_pkey: 0x0000 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.274 01:15:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:35.533 01:15:42 
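configure_kernel_target, nvmet_auth_init and nvmet_auth_set_key, traced above, build a kernel NVMe-oF target through configfs: a subsystem backed by /dev/nvme1n1 (nvme1n2 was skipped as zoned), an RDMA port on 192.168.100.8:4420, and a host entry carrying the DH-CHAP material for keyid 1. The echo destinations are not part of the trace, so the attribute file names below are the standard nvmet configfs layout and should be read as assumptions; the echoed values are taken from this run:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed attribute file
echo 1             > "$subsys/attr_allow_any_host"            # assumed attribute file
echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"       # assumed attribute file
echo 1             > "$subsys/namespaces/1/enable"            # assumed attribute file
echo 192.168.100.8 > "$port/addr_traddr"                      # assumed attribute file
echo rdma          > "$port/addr_trtype"                      # assumed attribute file
echo 4420          > "$port/addr_trsvcid"                      # assumed attribute file
echo ipv4          > "$port/addr_adrfam"                       # assumed attribute file
ln -s "$subsys" "$port/subsystems/"

# nvmet_auth_init: add the test host and restrict the subsystem to it
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"                         # assumed attribute file
ln -s "$host" "$subsys/allowed_hosts/"

# nvmet_auth_set_key sha256 ffdhe2048 1: hash, DH group and both secrets from the trace
echo 'hmac(sha256)' > "$host/dhchap_hash"                      # assumed attribute file
echo ffdhe2048      > "$host/dhchap_dhgroup"                   # assumed attribute file
echo 'DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==:' > "$host/dhchap_key"
echo 'DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==:' > "$host/dhchap_ctrl_key"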
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:35.533 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.534 nvme0n1 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.534 01:15:42 
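On the initiator side, connect_authenticate (traced above for keyid 1 with every digest and FFDHE group enabled) boils down to two RPCs: configure the allowed DH-CHAP digests and DH groups on the bdev_nvme module, then attach to the kernel subsystem with the keyring entries registered earlier; the nvme0n1 namespace only shows up if the handshake succeeds. The calls as they appear in the trace:

rpc_cmd bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1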
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.534 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.793 nvme0n1 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.793 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
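The host-side calls traced here (host/auth.sh@60 and @61) go through SPDK's JSON-RPC interface; rpc_cmd is the test harness wrapper around scripts/rpc.py. A minimal standalone sketch of the same two calls, assuming the default RPC socket and that the key1/ckey1 keyring entries were registered earlier in the run (that registration step is outside this excerpt):

    #!/usr/bin/env bash
    # Sketch only: mirrors the rpc_cmd invocations visible in the trace above.
    rpc=./scripts/rpc.py

    # Restrict the initiator to the digest/dhgroup pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attach over RDMA, presenting key1 and requesting bidirectional
    # authentication with the controller key ckey1.
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1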
00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.052 nvme0n1 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.052 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.311 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # 
local digest dhgroup keyid ckey 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.312 nvme0n1 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.312 01:15:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.571 01:15:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.571 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.572 nvme0n1 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.572 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.831 nvme0n1 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.831 01:15:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.831 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:37.090 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:37.091 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.091 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.350 nvme0n1 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
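On the target side, the nvmet_auth_set_key helper traced at host/auth.sh@42-51 re-keys the kernel nvmet host entry that was created earlier with mkdir and linked into the subsystem's allowed_hosts. The xtrace output shows only the echo commands, not their redirection targets, so the configfs attribute names below are an assumption based on the Linux nvmet interface (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the echoed values are the ones from the surrounding trace:

    #!/usr/bin/env bash
    # Sketch only: redirection targets are assumed (xtrace hides them); the
    # values are taken from the trace (digest sha256, dhgroup ffdhe3072, keyid 1).
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"
    echo 'ffdhe3072'    > "$host_dir/dhchap_dhgroup"
    echo 'DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==:' \
        > "$host_dir/dhchap_key"
    # A controller secret is only written when a ckey exists for this keyid,
    # which is what enables the bidirectional authentication cases.
    echo 'DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==:' \
        > "$host_dir/dhchap_ctrl_key"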
00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.350 01:15:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.350 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.609 nvme0n1 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:37.609 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:37.610 01:15:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.610 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.869 nvme0n1 00:31:37.869 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.869 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.869 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.869 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.869 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.869 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
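The rest of the excerpt repeats the same cycle: the loops opened at host/auth.sh@100-102 walk every digest, DH group, and key index, and each iteration re-keys the target, reconnects from the SPDK host, verifies that a controller actually appeared, and detaches it again. A condensed outline of that skeleton as it reads from the trace (the digest and dhgroup lists are the ones printed at auth.sh@94; nvmet_auth_set_key and connect_authenticate are the helpers traced above, so their bodies are elided here):

    # Outline only, reconstructed from the xtrace markers in this log.
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do          # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do    # host/auth.sh@101
            for keyid in "${!keys[@]}"; do     # host/auth.sh@102, keyids 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done

    # Inside connect_authenticate the pass/fail check (host/auth.sh@64-65) is
    # simply that the controller shows up under the expected name before it
    # is detached again:
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0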
00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.870 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.129 nvme0n1 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe3072 4 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.129 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.388 nvme0n1 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.388 01:15:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:38.388 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:38.389 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:38.389 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:38.389 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.957 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.216 nvme0n1 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.216 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.217 01:15:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:39.476 nvme0n1 00:31:39.476 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.476 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.476 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.476 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.476 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.476 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.476 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.476 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.476 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.476 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.735 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.994 nvme0n1 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:39.995 
01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.995 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.254 nvme0n1 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.254 01:15:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.513 nvme0n1 00:31:40.513 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.513 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.513 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.513 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.513 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.513 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.513 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.513 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.513 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.513 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.772 01:15:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:42.149 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:42.149 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:42.149 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:42.149 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:42.149 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.149 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.149 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.150 01:15:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.150 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.409 nvme0n1 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.409 01:15:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.409 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.667 nvme0n1 00:31:42.667 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:42.925 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.926 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.184 nvme0n1 00:31:43.184 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.184 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.184 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.184 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.184 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.184 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
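The pass beginning here repeats the same connect_authenticate sequence for sha256 / ffdhe6144 / keyid 3: auth.sh first pushes the hmac(sha256) digest, the dhgroup and the DHHC-1 key to the target side (the echo lines at auth.sh@48-51), then configures the SPDK initiator and connects over RDMA. A minimal sketch of the initiator-side RPC sequence for this pass follows; it is an illustration only, assuming a target already listening on 192.168.100.8:4420 over RDMA and DH-HMAC-CHAP keys registered under the names key3/ckey3 as the test uses them, with rpc_cmd being the autotest helper for issuing SPDK JSON-RPC calls.

# Sketch of one initiator-side connect_authenticate pass (sha256 / ffdhe6144 / keyid 3),
# mirroring the rpc_cmd calls in this log. Assumes a running target at 192.168.100.8:4420
# over RDMA and DH-HMAC-CHAP keys already registered as key3/ckey3.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0 once authentication succeeds
rpc_cmd bdev_nvme_detach_controller nvme0

Key 4 has no controller key in this test, so its passes drop --dhchap-ctrlr-key, as seen in the ffdhe3072 and ffdhe4096 iterations above.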
00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 
--dhchap-ctrlr-key ckey3 00:31:43.443 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.444 01:15:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.702 nvme0n1 00:31:43.702 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.702 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.702 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.702 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.702 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.702 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.702 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.702 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.702 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.702 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.961 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.220 nvme0n1 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:44.220 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:44.221 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.221 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.221 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:44.221 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:44.221 01:15:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:44.221 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:44.221 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:44.221 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:44.221 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.221 01:15:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.787 nvme0n1 00:31:44.787 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.787 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.787 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.787 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.787 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.787 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:45.046 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.047 01:15:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.615 nvme0n1 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 
00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.615 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.188 nvme0n1 00:31:46.188 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.188 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.188 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.188 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:46.189 01:15:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.189 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.447 01:15:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.015 nvme0n1 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.015 01:15:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.015 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.016 01:15:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.585 nvme0n1 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.585 01:15:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:47.585 
01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.585 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.844 nvme0n1 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe2048 1 00:31:47.844 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.845 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.103 nvme0n1 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.104 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.363 nvme0n1 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 
00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:48.363 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:48.364 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:48.364 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.364 01:15:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.622 nvme0n1 00:31:48.622 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.622 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.622 01:15:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.622 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.623 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.882 nvme0n1 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:48.882 01:15:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.882 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.141 nvme0n1 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:49.141 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.142 01:15:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.142 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.401 nvme0n1 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.401 01:15:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.401 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.660 nvme0n1 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.660 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.919 nvme0n1 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:49.919 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.920 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.179 nvme0n1 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 
00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:50.179 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.180 01:15:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.438 nvme0n1 
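Each attach is then verified and torn down the same way before the next key is tried; a condensed form of the check that repeats between iterations (controller name and jq filter exactly as in the trace):

# confirm the authenticated controller came up as nvme0, then detach for the next key/dhgroup
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0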
00:31:50.438 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.438 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.438 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.438 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.438 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.438 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.698 01:15:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.698 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.957 nvme0n1 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.957 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 
2 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.958 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.217 nvme0n1 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:51.217 01:15:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.217 01:15:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.476 nvme0n1 00:31:51.476 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.476 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.476 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.476 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.476 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.476 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.476 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.476 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
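The cycle above runs once per dhgroup and key index; the driving loop, reconstructed from the host/auth.sh@101-104 lines in this trace (array contents beyond what the log shows, and the fixed sha384 digest for this pass, are assumptions):

# outer loops from host/auth.sh: every listed FFDHE group is exercised with every key id
for dhgroup in "${dhgroups[@]}"; do                # ffdhe3072, ffdhe4096, ffdhe6144, ... in this run
        for keyid in "${!keys[@]}"; do             # 0..4 in this run
                nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # provision key (and ctrlr key) on the target
                connect_authenticate sha384 "$dhgroup" "$keyid"  # set host options, attach, verify, detach
        done
done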
00:31:51.476 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.476 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:51.735 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:51.736 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.736 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.994 nvme0n1 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.994 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.995 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.254 nvme0n1 00:31:52.254 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.254 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.254 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.254 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.254 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.513 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.513 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.513 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.513 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.513 01:15:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.513 01:15:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.513 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.770 nvme0n1 00:31:52.770 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.770 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.770 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.770 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.770 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.770 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.770 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.770 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.770 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.770 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.029 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.288 nvme0n1 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.288 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.547 01:15:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.805 nvme0n1 00:31:53.805 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.805 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.805 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.805 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.805 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.805 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.805 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.805 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.805 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:53.806 
01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.806 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.375 nvme0n1 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 
-- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.375 01:16:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.943 nvme0n1 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.943 01:16:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.511 nvme0n1 00:31:55.511 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.511 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.511 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.511 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.511 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.511 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.511 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.511 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.511 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.511 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.771 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.340 nvme0n1 00:31:56.340 
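The xtrace above repeats one host-side sequence per keyid: bdev_nvme_set_options pins the DH-HMAC-CHAP digest and dhgroup, bdev_nvme_attach_controller connects with the key pair for that keyid, bdev_nvme_get_controllers confirms nvme0 came up, and bdev_nvme_detach_controller tears it down again. A minimal sketch of that sequence as it appears in this part of the run (sha384 / ffdhe8192), assuming rpc_cmd wraps scripts/rpc.py against the running target and that keyN/ckeyN are key names the test registered earlier:

    # one connect_authenticate pass, reconstructed from the trace
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    # --dhchap-ctrlr-key is omitted when no controller key exists for the keyid (keyid 4 in this run)
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
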
01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.340 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.340 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.340 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.340 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.340 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.340 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.341 01:16:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.909 nvme0n1 00:31:56.909 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.909 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.909 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.909 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.910 01:16:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.478 nvme0n1 00:31:57.479 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.479 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.479 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.479 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.479 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.479 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.479 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.479 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.479 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.479 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:57.738 
01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.738 nvme0n1 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
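At host/auth.sh@100 through @104 the trace shows the enclosing loops advancing: the digest moves from sha384 to sha512, the dhgroup list restarts at ffdhe2048, and every keyid is exercised again. The loop structure, reconstructed from those markers (digest and dhgroup comments abbreviated to the values visible in this part of the log):

    for digest in "${digests[@]}"; do            # sha384, sha512 seen here
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192 seen here
            for keyid in "${!keys[@]}"; do       # keyids 0 through 4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: hmac(<digest>), dhgroup, key/ckey
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: set options, attach, verify nvme0, detach
            done
        done
    done
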
00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.738 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:57.997 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.998 
01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.998 nvme0n1 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.998 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe2048 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 nvme0n1 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.257 01:16:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:58.257 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.516 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.517 01:16:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.517 nvme0n1 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:58.517 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.517 01:16:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.776 nvme0n1 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:58.776 01:16:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.776 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.777 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.036 nvme0n1 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
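One full connect_authenticate pass (sha512, ffdhe3072, keyid 0) completes in the stanza above: set the allowed DH-HMAC-CHAP digest and DH group, resolve the target address, attach a controller with the matching host/controller keys, confirm it registered as nvme0, and detach it again. The bash sketch below replays just the rpc_cmd calls visible in the trace; rpc_cmd itself and the key0/ckey0 names come from the test environment (the keys were registered earlier in the run, not shown here), and anything the log does not record, such as error handling, is deliberately left out.

# Hypothetical standalone replay of the traced sequence; the NQNs, address,
# port and key names are taken verbatim from the log above.
dhgroup=ffdhe3072 keyid=0
target_ip=192.168.100.8    # value resolved by get_main_ns_ip for the rdma transport

# Restrict the initiator to the digest/dhgroup pair under test (host/auth.sh@60).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

# Attach with the host key and, when one is defined, the controller key (host/auth.sh@61).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
	-a "$target_ip" -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Authentication succeeded only if the controller became visible (host/auth.sh@64)...
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

# ...and it is detached before the next dhgroup/keyid combination (host/auth.sh@65).
rpc_cmd bdev_nvme_detach_controller nvme0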
00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
rdma ]] 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.036 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.295 nvme0n1 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:59.295 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 
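The nvmf/common.sh@769-783 fragments interleaved above are get_main_ns_ip choosing which address the attach should use: an associative array maps each transport to the environment variable holding its main-namespace IP (rdma to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP), the rdma entry is selected, and its value, 192.168.100.8 in this run, is echoed back to the caller. A rough reconstruction under two assumptions, that the transport name arrives via TEST_TRANSPORT and that the last step is a plain indirect expansion, since the trace only records the already-expanded checks:

# Sketch of get_main_ns_ip as it appears in the trace (nvmf/common.sh@769-783).
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		["rdma"]=NVMF_FIRST_TARGET_IP
		["tcp"]=NVMF_INITIATOR_IP
	)

	# Assumption: the transport ("rdma" here) is supplied via TEST_TRANSPORT.
	[[ -n ${TEST_TRANSPORT:-} ]] || return 1
	[[ -n ${ip_candidates[$TEST_TRANSPORT]:-} ]] || return 1

	ip=${ip_candidates[$TEST_TRANSPORT]}	# name of the variable, e.g. NVMF_FIRST_TARGET_IP
	[[ -n ${!ip:-} ]] || return 1		# its value must be set (192.168.100.8 in this run)
	echo "${!ip}"
}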
00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.296 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.555 01:16:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.555 nvme0n1 00:31:59.555 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.555 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.555 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.555 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.555 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.555 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.555 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.555 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.555 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.555 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.814 01:16:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.814 nvme0n1 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.814 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey= 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.073 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.074 nvme0n1 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.074 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
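Every stanza in this part of the log is one turn of the loops traced at host/auth.sh@101-104: the outer loop walks the ffdhe DH groups (2048, 3072, 4096 and 6144 are visible here), the inner loop walks all key indices, the target side is programmed with nvmet_auth_set_key, and connect_authenticate then attaches against it. A compact sketch of that driver, using only the names visible in the trace; the echoes at host/auth.sh@48-51 ('hmac(sha512)', the dhgroup, the key, the ctrlr key) show only the values being written, so their destination, presumably the nvmet configfs attributes for the host, is an assumption here.

# Driver loop reconstructed from the host/auth.sh@101-104 trace lines above.
# keys[]/ckeys[] hold the DHHC-1 secrets echoed in the log; dhgroups lists the
# ffdhe groups being exercised.
for dhgroup in "${dhgroups[@]}"; do
	for keyid in "${!keys[@]}"; do
		# Target side: publish the hash, DH group and key pair for this keyid
		# (the write target of the echoes is assumed, not shown in the trace).
		nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
		# Host side: set options, attach with key$keyid/ckey$keyid, verify, detach.
		connect_authenticate sha512 "$dhgroup" "$keyid"
	done
done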
00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.333 01:16:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.592 nvme0n1 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.592 01:16:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:00.592 01:16:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.592 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.851 nvme0n1 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.851 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:00.852 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:00.852 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:00.852 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:00.852 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:00.852 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:00.852 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.852 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.110 nvme0n1 00:32:01.110 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.110 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.110 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.110 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.110 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.110 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:32:01.369 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.370 01:16:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.370 01:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.629 nvme0n1 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.629 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.630 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.888 nvme0n1 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.889 01:16:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.889 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.457 nvme0n1 00:32:02.457 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.457 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.457 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.457 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.457 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.457 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.457 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.457 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.457 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.457 01:16:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
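
For readers following the trace, the per-iteration host-side sequence repeated above (the connect_authenticate step in host/auth.sh) condenses to the sketch below. It is reconstructed only from the rpc_cmd calls visible in this log; the ckeys[] array and the key names key0..key4 / ckey0..ckey4 are assumed to have been registered earlier in the run (not part of this excerpt), and rpc_cmd is assumed to be the common wrapper around SPDK's RPC client.

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# ckeys[] is assumed to be populated earlier in the run; the controller
	# key is only passed when one exists for this keyid (keyid 4 has none).
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# Limit the host to the digest/dhgroup pair under test.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# Attach over RDMA to the authenticated subsystem using key<keyid>;
	# 192.168.100.8 is the target IP resolved by get_main_ns_ip in this run.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
		-a 192.168.100.8 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# The controller must appear, then is torn down for the next iteration.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}
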
00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.457 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.025 nvme0n1 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:03.025 01:16:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.025 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.284 nvme0n1 00:32:03.284 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.284 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.284 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.284 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
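
The get_main_ns_ip helper traced repeatedly above (nvmf/common.sh@769-783) amounts to a small transport-to-variable lookup. The sketch below is a condensed reconstruction from the visible expansions; the transport variable name is an assumption, since the trace only shows its expanded value (rdma).

get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		["rdma"]=NVMF_FIRST_TARGET_IP   # resolves to 192.168.100.8 in this run
		["tcp"]=NVMF_INITIATOR_IP
	)

	# $transport is assumed to hold the transport under test ("rdma" here).
	[[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1

	ip=${ip_candidates[$transport]}     # name of the variable to dereference
	[[ -z ${!ip} ]] && return 1         # expands to 192.168.100.8 in this log
	echo "${!ip}"
}
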
00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.285 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.544 01:16:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.803 nvme0n1 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha512 ffdhe6144 4 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.803 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.371 nvme0n1 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDAxNGNiNWQ4ZDFhYmM4NWYxNWU3YjMyODJlNThhNmLD1dZS: 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: ]] 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFkZjJjZTgzYzcwZDY1Yjk4NTExZmIzM2U4NzkzZWM4NGM2MGQ2YTA2MjhhM2YxNWFkN2EzODcxMzI5NjA5NEW95Tw=: 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.371 01:16:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.939 nvme0n1 00:32:04.939 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.939 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.939 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.939 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.939 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.939 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.939 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.939 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.939 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.939 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.940 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.199 01:16:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:05.767 nvme0n1 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.767 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.335 nvme0n1 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWI3NDU5OTEzY2E2NGQwYzVjMmE5MDI0YWQxMTMyYjAwZjY1YTUyYTAyMjFhZTkyM4It8w==: 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: ]] 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOGViMzEwYTk0YTczNWFiYjA3ODE3M2NiZDRmNzQ8GWhr: 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:06.335 
01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.335 01:16:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.901 nvme0n1 00:32:06.901 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.901 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.901 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.901 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.901 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.901 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.901 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.901 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.901 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.901 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmRjMjgxZWU5ZmZiYTI5YTc2NjIwNTM0NWViNGI4YzYxNjRjY2VhY2EwMzg0NGJlNGY3NTI3YWZkODM2NjY3OP5Fw5o=: 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.161 01:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.728 nvme0n1 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:07.728 01:16:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.728 request: 00:32:07.728 { 00:32:07.728 "name": "nvme0", 00:32:07.728 "trtype": "rdma", 00:32:07.728 "traddr": "192.168.100.8", 00:32:07.728 "adrfam": "ipv4", 00:32:07.728 "trsvcid": "4420", 00:32:07.728 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:07.728 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:07.728 "prchk_reftag": false, 00:32:07.728 "prchk_guard": false, 00:32:07.728 "hdgst": false, 00:32:07.728 "ddgst": false, 00:32:07.728 "allow_unrecognized_csi": false, 00:32:07.728 "method": "bdev_nvme_attach_controller", 00:32:07.728 "req_id": 1 00:32:07.728 } 00:32:07.728 Got JSON-RPC error response 00:32:07.728 response: 00:32:07.728 { 00:32:07.728 "code": -5, 00:32:07.728 "message": "Input/output error" 00:32:07.728 } 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.728 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:07.987 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.988 request: 00:32:07.988 { 00:32:07.988 "name": "nvme0", 00:32:07.988 "trtype": "rdma", 00:32:07.988 "traddr": "192.168.100.8", 00:32:07.988 "adrfam": "ipv4", 00:32:07.988 "trsvcid": "4420", 00:32:07.988 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:07.988 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:07.988 "prchk_reftag": false, 00:32:07.988 "prchk_guard": false, 00:32:07.988 "hdgst": false, 00:32:07.988 "ddgst": false, 00:32:07.988 "dhchap_key": "key2", 00:32:07.988 "allow_unrecognized_csi": false, 00:32:07.988 "method": "bdev_nvme_attach_controller", 00:32:07.988 "req_id": 1 00:32:07.988 } 00:32:07.988 Got JSON-RPC error response 00:32:07.988 response: 00:32:07.988 { 00:32:07.988 "code": -5, 00:32:07.988 "message": "Input/output error" 00:32:07.988 } 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
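The two rejected attach attempts above are the negative half of the DH-HMAC-CHAP host test: the kernel target subsystem requires authentication, so an initiator that offers no key, or a key the target does not hold, is turned away and bdev_nvme_attach_controller reports JSON-RPC code -5 ("Input/output error"). A minimal manual sketch of the same check is below, not the test script itself: it assumes a target already listening at 192.168.100.8:4420 for nqn.2024-02.io.spdk:cnode0 as in this run, uses the SPDK scripts/rpc.py client that rpc_cmd wraps, and "key2" stands for a key name already registered on the initiator side that the target will not accept.

  # 1) No DH-HMAC-CHAP key at all -- expected to be rejected (code -5).
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      && echo 'unexpected success' || echo 'rejected as expected'

  # 2) A key the target does not recognize -- same expected rejection.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 \
      && echo 'unexpected success' || echo 'rejected as expected'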
00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.988 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:08.247 request: 00:32:08.247 { 00:32:08.247 "name": "nvme0", 00:32:08.247 "trtype": "rdma", 00:32:08.247 "traddr": "192.168.100.8", 00:32:08.247 "adrfam": "ipv4", 00:32:08.247 "trsvcid": "4420", 00:32:08.247 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:08.247 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:08.247 "prchk_reftag": false, 00:32:08.247 "prchk_guard": false, 00:32:08.247 "hdgst": false, 00:32:08.247 "ddgst": false, 00:32:08.247 "dhchap_key": "key1", 00:32:08.247 "dhchap_ctrlr_key": "ckey2", 00:32:08.247 "allow_unrecognized_csi": false, 00:32:08.247 "method": "bdev_nvme_attach_controller", 00:32:08.247 "req_id": 1 00:32:08.247 } 00:32:08.247 Got JSON-RPC error response 00:32:08.247 response: 00:32:08.247 { 00:32:08.247 "code": -5, 00:32:08.247 "message": "Input/output error" 00:32:08.247 } 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.247 nvme0n1 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.247 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.248 01:16:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.248 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.506 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.506 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.506 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.506 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:32:08.506 01:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.506 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.507 request: 
00:32:08.507 { 00:32:08.507 "name": "nvme0", 00:32:08.507 "dhchap_key": "key1", 00:32:08.507 "dhchap_ctrlr_key": "ckey2", 00:32:08.507 "method": "bdev_nvme_set_keys", 00:32:08.507 "req_id": 1 00:32:08.507 } 00:32:08.507 Got JSON-RPC error response 00:32:08.507 response: 00:32:08.507 { 00:32:08.507 "code": -13, 00:32:08.507 "message": "Permission denied" 00:32:08.507 } 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:08.507 01:16:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:09.884 01:16:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.884 01:16:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:09.884 01:16:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.884 01:16:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.884 01:16:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.884 01:16:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:09.884 01:16:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFhN2NlOGM1ZGI0ZmJhZmMwNzU1NTZkN2VhMzE5MDQzYmM2Mzc2NzFhZTFlOTMxAKa7vA==: 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: ]] 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwYWRkZmY3ZmIwYTg5ZTA4ZjdiNzFhMGUxOTFiMzcyZWFjN2E3Yjg4ZmY3Y2Ezep9mHw==: 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.820 nvme0n1 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmVlOTBjMzAwYzVhOWIxMmU3YzJiMjZhMDlhOTJhY2F5k37/: 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: ]] 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU5ODgzZTBlM2FlNzY3ZDkxZmVmY2Q3MGVjOTNkMmOvJ/hg: 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.820 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.079 request: 00:32:11.079 { 00:32:11.079 "name": "nvme0", 00:32:11.079 "dhchap_key": "key2", 00:32:11.079 "dhchap_ctrlr_key": "ckey1", 00:32:11.079 "method": "bdev_nvme_set_keys", 00:32:11.079 "req_id": 1 00:32:11.079 } 00:32:11.079 Got JSON-RPC error response 00:32:11.079 response: 00:32:11.079 { 00:32:11.079 "code": -13, 00:32:11.079 "message": "Permission denied" 00:32:11.079 } 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.079 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 
0 )) 00:32:11.080 01:16:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:12.015 01:16:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.015 01:16:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:12.015 01:16:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.015 01:16:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.015 01:16:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.015 01:16:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:12.015 01:16:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:12.952 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.952 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:12.952 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.952 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:13.211 rmmod nvme_rdma 00:32:13.211 rmmod nvme_fabrics 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 516369 ']' 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 516369 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 516369 ']' 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 516369 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 516369 
00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 516369' 00:32:13.211 killing process with pid 516369 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 516369 00:32:13.211 01:16:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 516369 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:32:14.148 01:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:32:16.686 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:17.253 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:17.253 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:18.191 
0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:32:18.191 01:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eRp /tmp/spdk.key-null.dtK /tmp/spdk.key-sha256.TBG /tmp/spdk.key-sha384.1Sh /tmp/spdk.key-sha512.NJx /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvme-auth.log 00:32:18.191 01:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:32:20.726 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:20.984 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:20.984 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:20.984 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:32:20.985 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:21.243 00:32:21.243 real 0m58.641s 00:32:21.243 user 0m55.018s 00:32:21.243 sys 0m13.500s 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.243 ************************************ 00:32:21.243 END TEST nvmf_auth_host 00:32:21.243 ************************************ 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.243 ************************************ 00:32:21.243 START TEST nvmf_bdevperf 00:32:21.243 ************************************ 00:32:21.243 01:16:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:32:21.503 * Looking for test storage... 
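The auth suite wraps up here (about 0m58s wall time) and nvmf_host.sh immediately hands off to the next stage via run_test. If you need to rerun only that stage outside the CI pipeline, a hedged equivalent is to invoke the same script directly; the path below is a placeholder for a local SPDK checkout (this CI node uses /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk), and the test expects to run as root with the RDMA-capable NICs and autorun-spdk.conf settings from this job available.

  # Hedged manual invocation of the stage started above, assuming a local checkout in $SPDK_DIR.
  SPDK_DIR=/path/to/spdk
  sudo "$SPDK_DIR"/test/nvmf/host/bdevperf.sh --transport=rdma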
00:32:21.503 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:32:21.503 01:16:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:21.503 01:16:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:32:21.503 01:16:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:21.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.503 --rc genhtml_branch_coverage=1 00:32:21.503 --rc genhtml_function_coverage=1 00:32:21.503 --rc genhtml_legend=1 00:32:21.503 --rc geninfo_all_blocks=1 00:32:21.503 --rc geninfo_unexecuted_blocks=1 00:32:21.503 00:32:21.503 ' 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:21.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.503 --rc genhtml_branch_coverage=1 00:32:21.503 --rc genhtml_function_coverage=1 00:32:21.503 --rc genhtml_legend=1 00:32:21.503 --rc geninfo_all_blocks=1 00:32:21.503 --rc geninfo_unexecuted_blocks=1 00:32:21.503 00:32:21.503 ' 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:21.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.503 --rc genhtml_branch_coverage=1 00:32:21.503 --rc genhtml_function_coverage=1 00:32:21.503 --rc genhtml_legend=1 00:32:21.503 --rc geninfo_all_blocks=1 00:32:21.503 --rc geninfo_unexecuted_blocks=1 00:32:21.503 00:32:21.503 ' 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:21.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.503 --rc genhtml_branch_coverage=1 00:32:21.503 --rc genhtml_function_coverage=1 00:32:21.503 --rc genhtml_legend=1 00:32:21.503 --rc geninfo_all_blocks=1 00:32:21.503 --rc geninfo_unexecuted_blocks=1 00:32:21.503 00:32:21.503 ' 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.503 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.504 01:16:28 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:21.504 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:21.504 01:16:28 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:21.504 01:16:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.075 01:16:33 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.075 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:28.076 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:28.076 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
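The PCI discovery traced above matches the node's NICs against known vendor:device pairs and finds both ports of an Intel E810 (vendor 0x8086, device 0x159b, bound to the ice driver) at 0000:af:00.0 and 0000:af:00.1. A rough way to reproduce that match by hand with plain pciutils, outside the test scripts (a sketch, not part of this run):

    # 8086 = Intel vendor ID, 159b = E810 device ID echoed in the trace above
    lspci -D -d 8086:159b
    # on this node the two lines printed should be 0000:af:00.0 and 0000:af:00.1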
00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@405 -- # modinfo irdma 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:28.076 Found net devices under 0000:af:00.0: cvl_0_0 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:28.076 Found net devices under 0000:af:00.1: cvl_0_1 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # 
modprobe ib_umad 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo cvl_0_0 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo cvl_0_1 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # 
ip addr show cvl_0_0 00:32:28.076 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:32:28.076 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:32:28.076 altname enp175s0f0np0 00:32:28.076 altname ens801f0np0 00:32:28.076 inet 192.168.100.8/24 scope global cvl_0_0 00:32:28.076 valid_lft forever preferred_lft forever 00:32:28.076 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:32:28.076 valid_lft forever preferred_lft forever 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:32:28.076 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:32:28.077 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:32:28.077 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:32:28.077 altname enp175s0f1np1 00:32:28.077 altname ens801f1np1 00:32:28.077 inet 192.168.100.9/24 scope global cvl_0_1 00:32:28.077 valid_lft forever preferred_lft forever 00:32:28.077 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:32:28.077 valid_lft forever preferred_lft forever 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo cvl_0_0 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@109 -- # continue 2 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo cvl_0_1 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:32:28.077 192.168.100.9' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:32:28.077 192.168.100.9' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:32:28.077 192.168.100.9' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 
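nvmftestinit finishes by walking the RDMA-capable interfaces (cvl_0_0 and cvl_0_1) and pulling their IPv4 addresses with the ip/awk/cut pipeline traced above, which yields NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9. Reduced to a standalone helper, the extraction step is roughly this (a minimal sketch of the get_ip_address call seen in the trace; interface names and addresses are the ones from this run):

    # first IPv4 address of an interface, prefix length stripped
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address cvl_0_0   # 192.168.100.8 on this node
    get_ip_address cvl_0_1   # 192.168.100.9 on this node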
00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=530826 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 530826 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 530826 ']' 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.077 01:16:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.077 [2024-11-19 01:16:33.993568] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:32:28.077 [2024-11-19 01:16:33.993661] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.077 [2024-11-19 01:16:34.119383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:28.077 [2024-11-19 01:16:34.227108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.077 [2024-11-19 01:16:34.227153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.077 [2024-11-19 01:16:34.227163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.077 [2024-11-19 01:16:34.227173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.077 [2024-11-19 01:16:34.227180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
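tgt_init starts the target with nvmfappstart -m 0xE, which places the SPDK reactors on cores 1-3 and leaves core 0 free; the bdevperf initiator launched later in this run uses -c 0x1, so target and initiator do not share a core here. The reactor lines that follow, and bdevperf's "Core Mask 0x1" job line further down, match that split. The mask arithmetic, written out:

    0xE = 0b1110 -> bits 1, 2, 3 set -> target reactors on cores 1, 2, 3
    0x1 = 0b0001 -> bit 0 set        -> bdevperf's single reactor on core 0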
00:32:28.077 [2024-11-19 01:16:34.229445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:28.077 [2024-11-19 01:16:34.229514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.077 [2024-11-19 01:16:34.229535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.336 [2024-11-19 01:16:34.862050] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x612000028fc0/0x617000007c40) succeed. 00:32:28.336 [2024-11-19 01:16:34.871487] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029140/0x617000007fc0) succeed. 00:32:28.336 [2024-11-19 01:16:34.871515] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.336 Malloc0 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:28.336 [2024-11-19 01:16:34.989212] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:28.336 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:28.336 { 00:32:28.336 "params": { 00:32:28.336 "name": "Nvme$subsystem", 00:32:28.336 "trtype": "$TEST_TRANSPORT", 00:32:28.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:28.337 "adrfam": "ipv4", 00:32:28.337 "trsvcid": "$NVMF_PORT", 00:32:28.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:28.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:28.337 "hdgst": ${hdgst:-false}, 00:32:28.337 "ddgst": ${ddgst:-false} 00:32:28.337 }, 00:32:28.337 "method": "bdev_nvme_attach_controller" 00:32:28.337 } 00:32:28.337 EOF 00:32:28.337 )") 00:32:28.337 01:16:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 
00:32:28.337 01:16:35 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:32:28.337 01:16:35 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:32:28.337 01:16:35 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:28.337 "params": { 00:32:28.337 "name": "Nvme1", 00:32:28.337 "trtype": "rdma", 00:32:28.337 "traddr": "192.168.100.8", 00:32:28.337 "adrfam": "ipv4", 00:32:28.337 "trsvcid": "4420", 00:32:28.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:28.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:28.337 "hdgst": false, 00:32:28.337 "ddgst": false 00:32:28.337 }, 00:32:28.337 "method": "bdev_nvme_attach_controller" 00:32:28.337 }' 00:32:28.595 [2024-11-19 01:16:35.069083] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:32:28.596 [2024-11-19 01:16:35.069171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530917 ] 00:32:28.596 [2024-11-19 01:16:35.195863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.854 [2024-11-19 01:16:35.316630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.127 Running I/O for 1 seconds... 00:32:30.062 15360.00 IOPS, 60.00 MiB/s 00:32:30.062 Latency(us) 00:32:30.062 [2024-11-19T00:16:36.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.062 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:30.062 Verification LBA range: start 0x0 length 0x4000 00:32:30.062 Nvme1n1 : 1.01 15382.09 60.09 0.00 0.00 8276.18 2512.21 19598.38 00:32:30.062 [2024-11-19T00:16:36.755Z] =================================================================================================================== 00:32:30.062 [2024-11-19T00:16:36.755Z] Total : 15382.09 60.09 0.00 0.00 8276.18 2512.21 19598.38 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=531308 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:30.997 { 00:32:30.997 "params": { 00:32:30.997 "name": "Nvme$subsystem", 00:32:30.997 "trtype": "$TEST_TRANSPORT", 00:32:30.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:30.997 "adrfam": "ipv4", 00:32:30.997 "trsvcid": "$NVMF_PORT", 00:32:30.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:30.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:30.997 "hdgst": ${hdgst:-false}, 00:32:30.997 "ddgst": ${ddgst:-false} 00:32:30.997 }, 00:32:30.997 "method": "bdev_nvme_attach_controller" 00:32:30.997 } 00:32:30.997 EOF 00:32:30.997 )") 00:32:30.997 01:16:37 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:32:30.997 01:16:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:30.997 "params": { 00:32:30.997 "name": "Nvme1", 00:32:30.997 "trtype": "rdma", 00:32:30.997 "traddr": "192.168.100.8", 00:32:30.997 "adrfam": "ipv4", 00:32:30.997 "trsvcid": "4420", 00:32:30.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:30.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:30.997 "hdgst": false, 00:32:30.997 "ddgst": false 00:32:30.997 }, 00:32:30.997 "method": "bdev_nvme_attach_controller" 00:32:30.997 }' 00:32:31.256 [2024-11-19 01:16:37.736133] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:32:31.256 [2024-11-19 01:16:37.736216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531308 ] 00:32:31.256 [2024-11-19 01:16:37.863544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.514 [2024-11-19 01:16:37.982851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.772 Running I/O for 15 seconds... 00:32:34.080 15460.00 IOPS, 60.39 MiB/s [2024-11-19T00:16:40.773Z] 15552.00 IOPS, 60.75 MiB/s [2024-11-19T00:16:40.773Z] 01:16:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 530826 00:32:34.080 01:16:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:34.651 [2024-11-19 01:16:41.236331] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:32:34.651 [2024-11-19 01:16:41.236392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 
[2024-11-19 01:16:41.236511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.651 [2024-11-19 01:16:41.236809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.651 [2024-11-19 01:16:41.236818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.236829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.652 [2024-11-19 01:16:41.236837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.236848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.652 [2024-11-19 01:16:41.236856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.236867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.652 [2024-11-19 01:16:41.236876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.236886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.652 [2024-11-19 01:16:41.236894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.236904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.652 [2024-11-19 01:16:41.236912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.236928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.652 [2024-11-19 01:16:41.236937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.236947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.652 [2024-11-19 01:16:41.236956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.236968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000042ff000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.236977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.236988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.236997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430b000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:122 nsid:1 lba:16440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430d000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004317000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431d000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431f000 len:0x1000 key:0xada0f368 00:32:34.652 
[2024-11-19 01:16:41.237298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0xada0f368 00:32:34.652 [2024-11-19 01:16:41.237396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.652 [2024-11-19 01:16:41.237406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432d000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16736 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200004357000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.237982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.237994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.238004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.653 [2024-11-19 01:16:41.238017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0xada0f368 00:32:34.653 [2024-11-19 01:16:41.238026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
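Every completion in this stretch carries the same status pair, printed as (00/08). As a hedged aside (the name mapping below follows the standard NVMe generic command status codes, it is not spelled out anywhere in this log), the pair is status code type / status code, and 0x00/0x08 corresponds to the "ABORTED - SQ DELETION" text SPDK prints beside it: these reads were still queued on a submission queue that the reset path is deleting. A small bash sketch of the decoding:

  # Decode the "(SCT/SC)" pair shown in the abort entries above.
  # The numeric values come straight from the log; the textual meaning assumes the
  # standard NVMe generic status table (SCT 0x0 = generic, SC 0x08 = aborted, SQ deleted).
  status="00/08"
  sct=$(( 16#${status%/*} ))
  sc=$(( 16#${status#*/} ))
  printf 'sct=%#x sc=%#x -> ABORTED - SQ DELETION\n' "$sct" "$sc"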
00:32:34.654 [2024-11-19 01:16:41.238281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:112 nsid:1 lba:16960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0xada0f368 00:32:34.654 [2024-11-19 01:16:41.238543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.654 [2024-11-19 01:16:41.238554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0xada0f368 00:32:34.655 
[2024-11-19 01:16:41.238665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.238981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.238993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.239002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.247303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.247315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.247344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0xada0f368 00:32:34.655 [2024-11-19 01:16:41.247354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.247895] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:34.655 [2024-11-19 01:16:41.247913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:34.655 [2024-11-19 01:16:41.247925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:8 PRP1 0x0 PRP2 0x0 00:32:34.655 [2024-11-19 01:16:41.247936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.655 [2024-11-19 01:16:41.248132] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:32:34.655 [2024-11-19 01:16:41.248149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.655 [2024-11-19 01:16:41.248160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.656 [2024-11-19 01:16:41.248171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.656 [2024-11-19 01:16:41.248181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.656 [2024-11-19 01:16:41.248191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.656 [2024-11-19 01:16:41.248199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.656 [2024-11-19 01:16:41.248209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.656 [2024-11-19 01:16:41.248218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.656 [2024-11-19 01:16:41.282538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:32:34.656 [2024-11-19 01:16:41.282569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:34.656 [2024-11-19 01:16:41.282581] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 
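At this point the host side has finished tearing down qpair 1: the remaining queued reads are manually completed as aborted, the admin queue's outstanding ASYNC EVENT REQUESTs are aborted as well, the CM event mismatch (TIMEWAIT_EXIT instead of DISCONNECTED) is noted, and the failover attempt is skipped because one is already in progress. When working from a saved copy of this console output, a few hedged grep helpers (the file name console.log is a placeholder, not something this job produces) summarize the burst quickly:

  log=console.log
  # How many queued commands were completed as aborted during the reset.
  grep -c 'ABORTED - SQ DELETION' "$log"
  # Which RDMA CM event mismatches occurred, and how often.
  grep -o 'Expected RDMA_CM_EVENT_[A-Z_]* but received RDMA_CM_EVENT_[A-Z_]*' "$log" | sort | uniq -c
  # How many reconnect attempts were rejected before the target came back.
  grep -c 'Failed to connect rqpair' "$log"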
00:32:34.656 [2024-11-19 01:16:41.285547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:34.656 [2024-11-19 01:16:41.288941] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:34.656 [2024-11-19 01:16:41.288965] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:34.656 [2024-11-19 01:16:41.288974] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:32:35.851 11605.33 IOPS, 45.33 MiB/s [2024-11-19T00:16:42.544Z] [2024-11-19 01:16:42.291930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:32:35.851 [2024-11-19 01:16:42.291998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:35.851 [2024-11-19 01:16:42.292510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:35.851 [2024-11-19 01:16:42.292522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:35.851 [2024-11-19 01:16:42.292534] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:32:35.851 [2024-11-19 01:16:42.292547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:35.851 [2024-11-19 01:16:42.300798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:35.851 [2024-11-19 01:16:42.304076] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:35.851 [2024-11-19 01:16:42.304098] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:35.851 [2024-11-19 01:16:42.304108] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:32:36.788 8704.00 IOPS, 34.00 MiB/s [2024-11-19T00:16:43.481Z] [2024-11-19 01:16:43.307045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:32:36.788 [2024-11-19 01:16:43.307110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:36.788 [2024-11-19 01:16:43.307659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:36.788 [2024-11-19 01:16:43.307672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:36.788 [2024-11-19 01:16:43.307682] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:32:36.788 [2024-11-19 01:16:43.307694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
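Each cycle above is the bdev_nvme layer disconnecting, retrying the RDMA connect (rejected while the target is down, hence RDMA_CM_EVENT_REJECTED and connect error -74), and marking the reset as failed before trying again about a second later, which is why the interleaved bdevperf throughput keeps dropping. For reference only, the same retry behavior can be tuned when a controller is attached by hand through rpc.py; this is a hedged sketch rather than how this run was wired up, and the three timeout flag names are assumptions to check against rpc.py bdev_nvme_attach_controller -h:

  RPC=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  # Attach the same subsystem this log talks to, with explicit reconnect tuning.
  $RPC bdev_nvme_attach_controller -b Nvme1 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 1 --ctrlr-loss-timeout-sec 60 --fast-io-fail-timeout-sec 10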
00:32:36.788 [2024-11-19 01:16:43.314788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:36.788 [2024-11-19 01:16:43.317988] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:36.788 [2024-11-19 01:16:43.318010] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:36.788 [2024-11-19 01:16:43.318019] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:32:37.046 6963.20 IOPS, 27.20 MiB/s [2024-11-19T00:16:43.739Z] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 530826 Killed "${NVMF_APP[@]}" "$@" 00:32:37.046 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:37.046 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:37.046 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:37.046 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:37.047 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.047 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=532320 00:32:37.047 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:37.047 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 532320 00:32:37.047 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 532320 ']' 00:32:37.047 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.047 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.047 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:37.047 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.047 01:16:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.305 [2024-11-19 01:16:43.759972] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:32:37.305 [2024-11-19 01:16:43.760065] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.305 [2024-11-19 01:16:43.891461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:37.564 [2024-11-19 01:16:44.003469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:37.564 [2024-11-19 01:16:44.003512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:37.564 [2024-11-19 01:16:44.003523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:37.564 [2024-11-19 01:16:44.003549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:37.564 [2024-11-19 01:16:44.003557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:37.564 [2024-11-19 01:16:44.005852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:37.564 [2024-11-19 01:16:44.005922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.564 [2024-11-19 01:16:44.005944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:37.823 [2024-11-19 01:16:44.320991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:32:37.823 [2024-11-19 01:16:44.321036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:37.823 [2024-11-19 01:16:44.321237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:37.823 [2024-11-19 01:16:44.321250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:37.823 [2024-11-19 01:16:44.321261] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:32:37.823 [2024-11-19 01:16:44.321277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:37.823 [2024-11-19 01:16:44.330044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:37.823 [2024-11-19 01:16:44.333578] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:37.823 [2024-11-19 01:16:44.333602] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:37.823 [2024-11-19 01:16:44.333612] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:32:38.082 5802.67 IOPS, 22.67 MiB/s [2024-11-19T00:16:44.775Z] 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:38.082 [2024-11-19 01:16:44.625684] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
rocep175s0f0(0x612000028fc0/0x617000007c40) succeed. 00:32:38.082 [2024-11-19 01:16:44.635097] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029140/0x617000007fc0) succeed. 00:32:38.082 [2024-11-19 01:16:44.635126] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:38.082 Malloc0 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:38.082 [2024-11-19 01:16:44.766391] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.082 01:16:44 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 531308 00:32:38.648 [2024-11-19 01:16:45.336623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:32:38.649 [2024-11-19 01:16:45.336660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
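The tgt_init sequence just traced rebuilds the target from scratch: an RDMA transport with 1024 shared buffers and an 8192-byte I/O unit (adjusted upward to 24576 to fit the device), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 192.168.100.8:4420. A consolidated sketch of the same calls made through rpc.py against the freshly started nvmf_tgt (the rpc.py path is an assumption; every argument is copied from the trace above):

  RPC=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the listener is up, the host's next reset attempt goes through, which is the "Resetting controller successful" line that follows.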
00:32:38.649 [2024-11-19 01:16:45.336859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:38.649 [2024-11-19 01:16:45.336872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:38.649 [2024-11-19 01:16:45.336883] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:32:38.649 [2024-11-19 01:16:45.336898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:38.907 [2024-11-19 01:16:45.348535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:38.907 [2024-11-19 01:16:45.389043] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:32:39.845 5085.71 IOPS, 19.87 MiB/s [2024-11-19T00:16:47.474Z] 6400.12 IOPS, 25.00 MiB/s [2024-11-19T00:16:48.850Z] 7426.22 IOPS, 29.01 MiB/s [2024-11-19T00:16:49.786Z] 8244.60 IOPS, 32.21 MiB/s [2024-11-19T00:16:50.721Z] 8914.45 IOPS, 34.82 MiB/s [2024-11-19T00:16:51.657Z] 9471.33 IOPS, 37.00 MiB/s [2024-11-19T00:16:52.594Z] 9944.23 IOPS, 38.84 MiB/s [2024-11-19T00:16:53.530Z] 10349.29 IOPS, 40.43 MiB/s [2024-11-19T00:16:53.530Z] 10700.47 IOPS, 41.80 MiB/s 00:32:46.837 Latency(us) 00:32:46.837 [2024-11-19T00:16:53.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.837 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:46.837 Verification LBA range: start 0x0 length 0x4000 00:32:46.837 Nvme1n1 : 15.01 10702.78 41.81 12420.78 0.00 5513.24 620.25 615164.59 00:32:46.837 [2024-11-19T00:16:53.530Z] =================================================================================================================== 00:32:46.837 [2024-11-19T00:16:53.530Z] Total : 10702.78 41.81 12420.78 0.00 5513.24 620.25 615164.59 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.776 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:47.776 rmmod nvme_rdma 00:32:48.035 rmmod nvme_fabrics 00:32:48.035 01:16:54 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 532320 ']' 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 532320 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 532320 ']' 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 532320 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 532320 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 532320' 00:32:48.035 killing process with pid 532320 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 532320 00:32:48.035 01:16:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 532320 00:32:49.413 01:16:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.413 01:16:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:32:49.413 00:32:49.413 real 0m28.026s 00:32:49.413 user 1m15.454s 00:32:49.413 sys 0m5.796s 00:32:49.413 01:16:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.413 01:16:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:49.413 ************************************ 00:32:49.413 END TEST nvmf_bdevperf 00:32:49.413 ************************************ 00:32:49.413 01:16:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:32:49.413 01:16:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:49.413 01:16:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.413 01:16:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.413 ************************************ 00:32:49.413 START TEST nvmf_target_disconnect 00:32:49.413 ************************************ 00:32:49.413 01:16:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:32:49.413 * Looking for test storage... 
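The bdevperf teardown traced just above follows a simple pattern: kill the nvmf target that tgt_init started, wait for it to exit, then unload the host-side fabrics modules. A stand-alone hedged sketch (the pid is the one recorded earlier in this log; in the harness wait succeeds because the target is a child of the same shell):

  nvmfpid=532320                 # pid captured when nvmf_tgt was restarted above
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null || true
  modprobe -v -r nvme-rdma       # mirrors the rmmod nvme_rdma / nvme_fabrics lines in the trace
  modprobe -v -r nvme-fabrics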
00:32:49.413 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:32:49.413 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:49.413 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:32:49.413 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:49.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.673 --rc genhtml_branch_coverage=1 00:32:49.673 --rc genhtml_function_coverage=1 00:32:49.673 --rc genhtml_legend=1 00:32:49.673 --rc geninfo_all_blocks=1 00:32:49.673 --rc geninfo_unexecuted_blocks=1 00:32:49.673 00:32:49.673 ' 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:49.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.673 --rc genhtml_branch_coverage=1 00:32:49.673 --rc genhtml_function_coverage=1 00:32:49.673 --rc genhtml_legend=1 00:32:49.673 --rc geninfo_all_blocks=1 00:32:49.673 --rc geninfo_unexecuted_blocks=1 00:32:49.673 00:32:49.673 ' 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:49.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.673 --rc genhtml_branch_coverage=1 00:32:49.673 --rc genhtml_function_coverage=1 00:32:49.673 --rc genhtml_legend=1 00:32:49.673 --rc geninfo_all_blocks=1 00:32:49.673 --rc geninfo_unexecuted_blocks=1 00:32:49.673 00:32:49.673 ' 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:49.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.673 --rc genhtml_branch_coverage=1 00:32:49.673 --rc genhtml_function_coverage=1 00:32:49.673 --rc genhtml_legend=1 00:32:49.673 --rc geninfo_all_blocks=1 00:32:49.673 --rc geninfo_unexecuted_blocks=1 00:32:49.673 00:32:49.673 ' 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.673 01:16:56 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.673 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:49.674 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme 00:32:49.674 01:16:56 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.674 01:16:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:56.243 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.244 01:17:01 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:56.244 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.244 01:17:01 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:56.244 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@405 -- # modinfo irdma 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:56.244 Found net devices under 0000:af:00.0: cvl_0_0 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:56.244 Found net devices under 0000:af:00.1: cvl_0_1 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.244 
01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:56.244 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_0 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:56.245 01:17:01 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:32:56.245 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:32:56.245 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:32:56.245 altname enp175s0f0np0 00:32:56.245 altname ens801f0np0 00:32:56.245 inet 192.168.100.8/24 scope global cvl_0_0 00:32:56.245 valid_lft forever preferred_lft forever 00:32:56.245 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:32:56.245 valid_lft forever preferred_lft forever 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:32:56.245 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:32:56.245 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:32:56.245 altname enp175s0f1np1 00:32:56.245 altname ens801f1np1 00:32:56.245 inet 192.168.100.9/24 scope global cvl_0_1 00:32:56.245 valid_lft forever preferred_lft forever 00:32:56.245 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:32:56.245 valid_lft forever preferred_lft forever 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@450 -- # return 0 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_0 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo cvl_0_1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:32:56.245 192.168.100.9' 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:32:56.245 192.168.100.9' 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:32:56.245 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:32:56.246 192.168.100.9' 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:56.246 ************************************ 00:32:56.246 START TEST nvmf_target_disconnect_tc1 00:32:56.246 ************************************ 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect ]] 00:32:56.246 01:17:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:56.246 [2024-11-19 01:17:02.148557] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:56.246 [2024-11-19 01:17:02.148618] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:56.246 [2024-11-19 01:17:02.148631] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d6ec0 00:32:56.505 [2024-11-19 01:17:03.151620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:32:56.505 [2024-11-19 01:17:03.151668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
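The errors around this point come from target_disconnect_tc1: the reconnect example is launched through the NOT helper against 192.168.100.8:4420 before any target application is listening there, so the RDMA connect is rejected and the wrapped command's non-zero exit is what lets the test pass. A minimal sketch of that invocation, assuming the NOT helper and rootdir come from the test's common scripts as the trace suggests:

    # tc1, roughly as traced above: run the reconnect example and require it to fail.
    # NOT inverts the exit status; the path mirrors the trace and is illustrative only.
    rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    NOT "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'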
00:32:56.505 [2024-11-19 01:17:03.151685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:32:56.505 [2024-11-19 01:17:03.151742] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:56.505 [2024-11-19 01:17:03.151756] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:56.505 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:32:56.506 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:56.764 Initializing NVMe Controllers 00:32:56.764 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:32:56.764 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:56.764 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:56.764 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:56.764 00:32:56.764 real 0m1.297s 00:32:56.764 user 0m0.979s 00:32:56.764 sys 0m0.307s 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:56.765 ************************************ 00:32:56.765 END TEST nvmf_target_disconnect_tc1 00:32:56.765 ************************************ 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:56.765 ************************************ 00:32:56.765 START TEST nvmf_target_disconnect_tc2 00:32:56.765 ************************************ 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=537554 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 537554 00:32:56.765 01:17:03 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 537554 ']' 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.765 01:17:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:56.765 [2024-11-19 01:17:03.421201] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:32:56.765 [2024-11-19 01:17:03.421284] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.023 [2024-11-19 01:17:03.542622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:57.023 [2024-11-19 01:17:03.655184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.023 [2024-11-19 01:17:03.655232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.023 [2024-11-19 01:17:03.655243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.023 [2024-11-19 01:17:03.655271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.023 [2024-11-19 01:17:03.655280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
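For tc2 the script brings up a real target first: disconnect_init calls nvmfappstart -m 0xF0, which launches build/bin/nvmf_tgt with core mask 0xF0 (cores 4-7, matching the four reactor notices that follow) and then waits for its RPC socket. A short sketch of that step, using the helper names visible in the trace:

    # disconnect_init <traddr>, as suggested by the trace: start the target app
    # on cores 4-7 and block until /var/tmp/spdk.sock accepts RPCs.
    nvmfappstart -m 0xF0        # wraps: nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
    waitforlisten "$nvmfpid"    # nvmfpid=537554 in this run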
00:32:57.023 [2024-11-19 01:17:03.657561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:57.023 [2024-11-19 01:17:03.657651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:57.023 [2024-11-19 01:17:03.657662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:57.023 [2024-11-19 01:17:03.657689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:57.589 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.589 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:32:57.589 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.589 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.589 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.589 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.589 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:57.589 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.589 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.848 Malloc0 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.848 [2024-11-19 01:17:04.397054] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000298c0/0x617000007c40) succeed. 00:32:57.848 [2024-11-19 01:17:04.406937] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029a40/0x617000007fc0) succeed. 00:32:57.848 [2024-11-19 01:17:04.406965] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.848 [2024-11-19 01:17:04.439354] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=537797 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:57.848 01:17:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:00.377 01:17:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@45 -- # kill -9 537554 00:33:00.377 01:17:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:00.636 [2024-11-19 01:17:07.283330] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Read completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 Write completed with error (sct=0, sc=8) 00:33:00.636 starting I/O failed 00:33:00.636 [2024-11-19 01:17:07.284476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:00.636 [2024-11-19 01:17:07.286627] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM 
event channel (status = 8) 00:33:00.636 [2024-11-19 01:17:07.286650] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:00.636 [2024-11-19 01:17:07.286662] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:02.010 [2024-11-19 01:17:08.289588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:02.010 qpair failed and we were unable to recover it. 00:33:02.010 [2024-11-19 01:17:08.291543] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:02.010 [2024-11-19 01:17:08.291570] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:02.010 [2024-11-19 01:17:08.291582] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:02.011 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 537554 Killed "${NVMF_APP[@]}" "$@" 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=538478 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 538478 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 538478 ']' 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:02.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:02.011 01:17:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:02.011 [2024-11-19 01:17:08.547707] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:02.011 [2024-11-19 01:17:08.547816] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:02.011 [2024-11-19 01:17:08.677347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:02.269 [2024-11-19 01:17:08.785861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.269 [2024-11-19 01:17:08.785908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.269 [2024-11-19 01:17:08.785919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.269 [2024-11-19 01:17:08.785930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:02.269 [2024-11-19 01:17:08.785938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:02.269 [2024-11-19 01:17:08.788167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:02.269 [2024-11-19 01:17:08.788329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:02.269 [2024-11-19 01:17:08.788247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:02.269 [2024-11-19 01:17:08.788349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:02.836 [2024-11-19 01:17:09.294498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:02.836 qpair failed and we were unable to recover it. 
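Between the two target instances the script does the actual disconnect: the reconnect example is started in the background against the subsystem (reconnectpid=537797), the first target (pid 537554) is killed with kill -9 mid-I/O, which produces the burst of failed completions and CQ transport errors above, and disconnect_init then starts a fresh nvmf_tgt (pid 538478). A sketch of that sequence as traced, with the pid assignment written the way the script presumably does it:

    # tc2 disconnect/restart sequence, roughly as traced above:
    "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!                 # 537797 in this run
    sleep 2
    kill -9 "$nvmfpid"              # hard-kill the running target (537554 here)
    sleep 2
    disconnect_init 192.168.100.8   # start a new nvmf_tgt and reconfigure it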
00:33:02.836 [2024-11-19 01:17:09.296506] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:02.836 [2024-11-19 01:17:09.296529] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:02.836 [2024-11-19 01:17:09.296543] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:02.836 Malloc0 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:02.836 [2024-11-19 01:17:09.499449] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000298c0/0x617000007c40) succeed. 00:33:02.836 [2024-11-19 01:17:09.509332] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029a40/0x617000007fc0) succeed. 00:33:02.836 [2024-11-19 01:17:09.509360] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.836 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:03.094 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.094 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:03.094 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.094 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:03.094 [2024-11-19 01:17:09.541739] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:03.094 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.094 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:03.094 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.094 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:03.094 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.094 01:17:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 537797 00:33:03.662 [2024-11-19 01:17:10.299455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.662 qpair failed and we were unable to recover it. 
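Each disconnect_init pass configures the target through the same RPC sequence, visible once for pid 537554 and again here for pid 538478: create a 64 MB malloc bdev, create the RDMA transport, create subsystem cnode1, attach the namespace, and open listeners on 192.168.100.8:4420. Condensed, using the rpc_cmd helper exactly as the log shows it:

    # Target-side configuration replayed by every disconnect_init, per the trace:
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    wait "$reconnectpid"            # @50: wait for the background reconnect run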
00:33:03.662 [2024-11-19 01:17:10.306206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.662 [2024-11-19 01:17:10.306316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.662 [2024-11-19 01:17:10.306343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.662 [2024-11-19 01:17:10.306360] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.662 [2024-11-19 01:17:10.306374] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.662 [2024-11-19 01:17:10.313644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.662 qpair failed and we were unable to recover it. 00:33:03.662 [2024-11-19 01:17:10.325976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.662 [2024-11-19 01:17:10.326049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.662 [2024-11-19 01:17:10.326075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.662 [2024-11-19 01:17:10.326087] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.662 [2024-11-19 01:17:10.326099] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.662 [2024-11-19 01:17:10.333591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.662 qpair failed and we were unable to recover it. 00:33:03.662 [2024-11-19 01:17:10.346096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.662 [2024-11-19 01:17:10.346178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.662 [2024-11-19 01:17:10.346201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.662 [2024-11-19 01:17:10.346214] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.662 [2024-11-19 01:17:10.346224] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.662 [2024-11-19 01:17:10.353604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.662 qpair failed and we were unable to recover it. 
00:33:03.922 [2024-11-19 01:17:10.366016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.366089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.366114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.366129] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.366140] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.922 [2024-11-19 01:17:10.373693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.922 qpair failed and we were unable to recover it. 00:33:03.922 [2024-11-19 01:17:10.386376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.386456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.386478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.386494] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.386503] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.922 [2024-11-19 01:17:10.393754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.922 qpair failed and we were unable to recover it. 00:33:03.922 [2024-11-19 01:17:10.406312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.406378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.406403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.406415] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.406426] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.922 [2024-11-19 01:17:10.413786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.922 qpair failed and we were unable to recover it. 
00:33:03.922 [2024-11-19 01:17:10.426312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.426378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.426401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.426416] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.426425] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.922 [2024-11-19 01:17:10.433826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.922 qpair failed and we were unable to recover it. 00:33:03.922 [2024-11-19 01:17:10.446221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.446289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.446319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.446331] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.446342] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.922 [2024-11-19 01:17:10.453943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.922 qpair failed and we were unable to recover it. 00:33:03.922 [2024-11-19 01:17:10.466325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.466395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.466417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.466430] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.466439] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.922 [2024-11-19 01:17:10.474026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.922 qpair failed and we were unable to recover it. 
00:33:03.922 [2024-11-19 01:17:10.486515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.486577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.486602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.486614] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.486625] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.922 [2024-11-19 01:17:10.493991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.922 qpair failed and we were unable to recover it. 00:33:03.922 [2024-11-19 01:17:10.506524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.506594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.506617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.506631] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.506641] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.922 [2024-11-19 01:17:10.515172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.922 qpair failed and we were unable to recover it. 00:33:03.922 [2024-11-19 01:17:10.526573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.526638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.526665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.526677] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.526688] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.922 [2024-11-19 01:17:10.534174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.922 qpair failed and we were unable to recover it. 
00:33:03.922 [2024-11-19 01:17:10.546480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.546551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.546573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.546586] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.546596] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.922 [2024-11-19 01:17:10.554246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.922 qpair failed and we were unable to recover it. 00:33:03.922 [2024-11-19 01:17:10.566758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.922 [2024-11-19 01:17:10.566826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.922 [2024-11-19 01:17:10.566851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.922 [2024-11-19 01:17:10.566862] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.922 [2024-11-19 01:17:10.566875] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.923 [2024-11-19 01:17:10.574320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.923 qpair failed and we were unable to recover it. 00:33:03.923 [2024-11-19 01:17:10.586678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.923 [2024-11-19 01:17:10.586749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.923 [2024-11-19 01:17:10.586771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.923 [2024-11-19 01:17:10.586785] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.923 [2024-11-19 01:17:10.586794] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:03.923 [2024-11-19 01:17:10.594394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:03.923 qpair failed and we were unable to recover it. 
00:33:03.923 [2024-11-19 01:17:10.606720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:03.923 [2024-11-19 01:17:10.606784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:03.923 [2024-11-19 01:17:10.606809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:03.923 [2024-11-19 01:17:10.606821] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:03.923 [2024-11-19 01:17:10.606832] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.182 [2024-11-19 01:17:10.614403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.182 qpair failed and we were unable to recover it. 00:33:04.182 [2024-11-19 01:17:10.626692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.182 [2024-11-19 01:17:10.626771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.182 [2024-11-19 01:17:10.626796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.182 [2024-11-19 01:17:10.626810] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.182 [2024-11-19 01:17:10.626819] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.182 [2024-11-19 01:17:10.634478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.182 qpair failed and we were unable to recover it. 00:33:04.182 [2024-11-19 01:17:10.646723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.182 [2024-11-19 01:17:10.646788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.182 [2024-11-19 01:17:10.646814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.182 [2024-11-19 01:17:10.646826] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.182 [2024-11-19 01:17:10.646837] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.182 [2024-11-19 01:17:10.654488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.182 qpair failed and we were unable to recover it. 
00:33:04.182 [2024-11-19 01:17:10.666800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.182 [2024-11-19 01:17:10.666877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.182 [2024-11-19 01:17:10.666901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.182 [2024-11-19 01:17:10.666916] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.182 [2024-11-19 01:17:10.666925] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.182 [2024-11-19 01:17:10.674561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.182 qpair failed and we were unable to recover it. 00:33:04.182 [2024-11-19 01:17:10.687092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.182 [2024-11-19 01:17:10.687162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.182 [2024-11-19 01:17:10.687187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.182 [2024-11-19 01:17:10.687198] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.182 [2024-11-19 01:17:10.687209] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.182 [2024-11-19 01:17:10.694624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.182 qpair failed and we were unable to recover it. 00:33:04.182 [2024-11-19 01:17:10.707100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.182 [2024-11-19 01:17:10.707165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.182 [2024-11-19 01:17:10.707187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.182 [2024-11-19 01:17:10.707206] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.182 [2024-11-19 01:17:10.707215] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.182 [2024-11-19 01:17:10.714740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.182 qpair failed and we were unable to recover it. 
00:33:04.182 [2024-11-19 01:17:10.727239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.183 [2024-11-19 01:17:10.727308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.183 [2024-11-19 01:17:10.727333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.183 [2024-11-19 01:17:10.727344] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.183 [2024-11-19 01:17:10.727355] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.183 [2024-11-19 01:17:10.734785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.183 qpair failed and we were unable to recover it. 00:33:04.183 [2024-11-19 01:17:10.747187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.183 [2024-11-19 01:17:10.747253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.183 [2024-11-19 01:17:10.747276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.183 [2024-11-19 01:17:10.747290] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.183 [2024-11-19 01:17:10.747306] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.183 [2024-11-19 01:17:10.754835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.183 qpair failed and we were unable to recover it. 00:33:04.183 [2024-11-19 01:17:10.767127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.183 [2024-11-19 01:17:10.767192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.183 [2024-11-19 01:17:10.767216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.183 [2024-11-19 01:17:10.767228] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.183 [2024-11-19 01:17:10.767239] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.183 [2024-11-19 01:17:10.775327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.183 qpair failed and we were unable to recover it. 
00:33:04.183 [2024-11-19 01:17:10.787266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.183 [2024-11-19 01:17:10.787338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.183 [2024-11-19 01:17:10.787360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.183 [2024-11-19 01:17:10.787373] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.183 [2024-11-19 01:17:10.787383] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.183 [2024-11-19 01:17:10.794912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.183 qpair failed and we were unable to recover it. 00:33:04.183 [2024-11-19 01:17:10.807423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.183 [2024-11-19 01:17:10.807484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.183 [2024-11-19 01:17:10.807509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.183 [2024-11-19 01:17:10.807520] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.183 [2024-11-19 01:17:10.807530] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.183 [2024-11-19 01:17:10.814978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.183 qpair failed and we were unable to recover it. 00:33:04.183 [2024-11-19 01:17:10.827279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.183 [2024-11-19 01:17:10.827358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.183 [2024-11-19 01:17:10.827380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.183 [2024-11-19 01:17:10.827393] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.183 [2024-11-19 01:17:10.827403] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.183 [2024-11-19 01:17:10.835003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.183 qpair failed and we were unable to recover it. 
00:33:04.183 [2024-11-19 01:17:10.847365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.183 [2024-11-19 01:17:10.847428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.183 [2024-11-19 01:17:10.847455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.183 [2024-11-19 01:17:10.847468] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.183 [2024-11-19 01:17:10.847478] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.183 [2024-11-19 01:17:10.855038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.183 qpair failed and we were unable to recover it. 00:33:04.183 [2024-11-19 01:17:10.867452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.183 [2024-11-19 01:17:10.867523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.183 [2024-11-19 01:17:10.867546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.183 [2024-11-19 01:17:10.867559] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.183 [2024-11-19 01:17:10.867569] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.443 [2024-11-19 01:17:10.875216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-11-19 01:17:10.887547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.443 [2024-11-19 01:17:10.887616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.443 [2024-11-19 01:17:10.887641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.443 [2024-11-19 01:17:10.887653] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.443 [2024-11-19 01:17:10.887670] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.443 [2024-11-19 01:17:10.895267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.443 qpair failed and we were unable to recover it. 
00:33:04.443 [2024-11-19 01:17:10.907623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.443 [2024-11-19 01:17:10.907698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.443 [2024-11-19 01:17:10.907722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.443 [2024-11-19 01:17:10.907735] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.443 [2024-11-19 01:17:10.907744] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.443 [2024-11-19 01:17:10.915364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-11-19 01:17:10.927609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.443 [2024-11-19 01:17:10.927674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.443 [2024-11-19 01:17:10.927699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.443 [2024-11-19 01:17:10.927711] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.443 [2024-11-19 01:17:10.927723] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.443 [2024-11-19 01:17:10.935378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-11-19 01:17:10.947742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.443 [2024-11-19 01:17:10.947816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.443 [2024-11-19 01:17:10.947839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.443 [2024-11-19 01:17:10.947852] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.443 [2024-11-19 01:17:10.947861] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.443 [2024-11-19 01:17:10.955484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.443 qpair failed and we were unable to recover it. 
00:33:04.443 [2024-11-19 01:17:10.967938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.443 [2024-11-19 01:17:10.968004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.443 [2024-11-19 01:17:10.968032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.443 [2024-11-19 01:17:10.968044] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.443 [2024-11-19 01:17:10.968058] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.443 [2024-11-19 01:17:10.975488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-11-19 01:17:10.987967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.443 [2024-11-19 01:17:10.988042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.443 [2024-11-19 01:17:10.988065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.443 [2024-11-19 01:17:10.988082] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.443 [2024-11-19 01:17:10.988091] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.443 [2024-11-19 01:17:10.995547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.443 qpair failed and we were unable to recover it. 00:33:04.443 [2024-11-19 01:17:11.007760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.443 [2024-11-19 01:17:11.007827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.443 [2024-11-19 01:17:11.007852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.443 [2024-11-19 01:17:11.007863] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.443 [2024-11-19 01:17:11.007874] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.443 [2024-11-19 01:17:11.015433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.443 qpair failed and we were unable to recover it. 
00:33:04.443 [2024-11-19 01:17:11.028024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.443 [2024-11-19 01:17:11.028100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.443 [2024-11-19 01:17:11.028122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.444 [2024-11-19 01:17:11.028138] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.444 [2024-11-19 01:17:11.028149] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.444 [2024-11-19 01:17:11.036838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.444 qpair failed and we were unable to recover it. 00:33:04.444 [2024-11-19 01:17:11.048062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.444 [2024-11-19 01:17:11.048124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.444 [2024-11-19 01:17:11.048149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.444 [2024-11-19 01:17:11.048161] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.444 [2024-11-19 01:17:11.048175] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.444 [2024-11-19 01:17:11.055750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.444 qpair failed and we were unable to recover it. 00:33:04.444 [2024-11-19 01:17:11.068059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.444 [2024-11-19 01:17:11.068125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.444 [2024-11-19 01:17:11.068148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.444 [2024-11-19 01:17:11.068161] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.444 [2024-11-19 01:17:11.068170] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.444 [2024-11-19 01:17:11.075741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.444 qpair failed and we were unable to recover it. 
00:33:04.444 [2024-11-19 01:17:11.088186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.444 [2024-11-19 01:17:11.088251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.444 [2024-11-19 01:17:11.088276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.444 [2024-11-19 01:17:11.088287] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.444 [2024-11-19 01:17:11.088304] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.444 [2024-11-19 01:17:11.095748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.444 qpair failed and we were unable to recover it. 00:33:04.444 [2024-11-19 01:17:11.108274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.444 [2024-11-19 01:17:11.108351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.444 [2024-11-19 01:17:11.108374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.444 [2024-11-19 01:17:11.108388] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.444 [2024-11-19 01:17:11.108397] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.444 [2024-11-19 01:17:11.115911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.444 qpair failed and we were unable to recover it. 00:33:04.444 [2024-11-19 01:17:11.128333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.444 [2024-11-19 01:17:11.128402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.444 [2024-11-19 01:17:11.128426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.444 [2024-11-19 01:17:11.128438] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.444 [2024-11-19 01:17:11.128449] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.704 [2024-11-19 01:17:11.135939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.704 qpair failed and we were unable to recover it. 
00:33:04.704 [2024-11-19 01:17:11.148442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.704 [2024-11-19 01:17:11.148513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.704 [2024-11-19 01:17:11.148536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.704 [2024-11-19 01:17:11.148550] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.704 [2024-11-19 01:17:11.148559] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.704 [2024-11-19 01:17:11.156030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.704 qpair failed and we were unable to recover it. 00:33:04.704 [2024-11-19 01:17:11.168511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.704 [2024-11-19 01:17:11.168575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.704 [2024-11-19 01:17:11.168603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.704 [2024-11-19 01:17:11.168615] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.704 [2024-11-19 01:17:11.168628] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.704 [2024-11-19 01:17:11.176054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.704 qpair failed and we were unable to recover it. 00:33:04.704 [2024-11-19 01:17:11.188389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.704 [2024-11-19 01:17:11.188465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.704 [2024-11-19 01:17:11.188487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.704 [2024-11-19 01:17:11.188500] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.704 [2024-11-19 01:17:11.188510] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.704 [2024-11-19 01:17:11.196141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.704 qpair failed and we were unable to recover it. 
00:33:04.704 [2024-11-19 01:17:11.208632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.704 [2024-11-19 01:17:11.208700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.704 [2024-11-19 01:17:11.208724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.704 [2024-11-19 01:17:11.208737] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.704 [2024-11-19 01:17:11.208749] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.704 [2024-11-19 01:17:11.216224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.704 qpair failed and we were unable to recover it. 00:33:04.704 [2024-11-19 01:17:11.228584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.704 [2024-11-19 01:17:11.228654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.704 [2024-11-19 01:17:11.228676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.704 [2024-11-19 01:17:11.228690] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.704 [2024-11-19 01:17:11.228699] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.704 [2024-11-19 01:17:11.236191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.704 qpair failed and we were unable to recover it. 00:33:04.704 [2024-11-19 01:17:11.248737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.704 [2024-11-19 01:17:11.248804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.705 [2024-11-19 01:17:11.248828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.705 [2024-11-19 01:17:11.248840] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.705 [2024-11-19 01:17:11.248851] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.705 [2024-11-19 01:17:11.256365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.705 qpair failed and we were unable to recover it. 
00:33:04.705 [2024-11-19 01:17:11.268752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.705 [2024-11-19 01:17:11.268822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.705 [2024-11-19 01:17:11.268844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.705 [2024-11-19 01:17:11.268857] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.705 [2024-11-19 01:17:11.268866] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.705 [2024-11-19 01:17:11.276392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.705 qpair failed and we were unable to recover it. 00:33:04.705 [2024-11-19 01:17:11.288775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.705 [2024-11-19 01:17:11.288841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.705 [2024-11-19 01:17:11.288866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.705 [2024-11-19 01:17:11.288879] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.705 [2024-11-19 01:17:11.288890] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.705 [2024-11-19 01:17:11.297752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.705 qpair failed and we were unable to recover it. 00:33:04.705 [2024-11-19 01:17:11.308913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.705 [2024-11-19 01:17:11.308990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.705 [2024-11-19 01:17:11.309015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.705 [2024-11-19 01:17:11.309030] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.705 [2024-11-19 01:17:11.309039] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.705 [2024-11-19 01:17:11.316450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.705 qpair failed and we were unable to recover it. 
00:33:04.705 [2024-11-19 01:17:11.329009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.705 [2024-11-19 01:17:11.329076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.705 [2024-11-19 01:17:11.329101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.705 [2024-11-19 01:17:11.329113] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.705 [2024-11-19 01:17:11.329123] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.705 [2024-11-19 01:17:11.336513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.705 qpair failed and we were unable to recover it. 00:33:04.705 [2024-11-19 01:17:11.349011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.705 [2024-11-19 01:17:11.349084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.705 [2024-11-19 01:17:11.349107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.705 [2024-11-19 01:17:11.349123] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.705 [2024-11-19 01:17:11.349132] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.705 [2024-11-19 01:17:11.356700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.705 qpair failed and we were unable to recover it. 00:33:04.705 [2024-11-19 01:17:11.369135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.705 [2024-11-19 01:17:11.369195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.705 [2024-11-19 01:17:11.369219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.705 [2024-11-19 01:17:11.369231] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.705 [2024-11-19 01:17:11.369242] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.705 [2024-11-19 01:17:11.376708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.705 qpair failed and we were unable to recover it. 
00:33:04.705 [2024-11-19 01:17:11.389205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.705 [2024-11-19 01:17:11.389280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.705 [2024-11-19 01:17:11.389308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.705 [2024-11-19 01:17:11.389321] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.705 [2024-11-19 01:17:11.389333] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.966 [2024-11-19 01:17:11.396720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.966 qpair failed and we were unable to recover it. 00:33:04.966 [2024-11-19 01:17:11.409008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.966 [2024-11-19 01:17:11.409075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.966 [2024-11-19 01:17:11.409100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.966 [2024-11-19 01:17:11.409112] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.966 [2024-11-19 01:17:11.409123] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.966 [2024-11-19 01:17:11.416822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.966 qpair failed and we were unable to recover it. 00:33:04.966 [2024-11-19 01:17:11.430155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.966 [2024-11-19 01:17:11.430226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.966 [2024-11-19 01:17:11.430249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.966 [2024-11-19 01:17:11.430263] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.966 [2024-11-19 01:17:11.430272] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.966 [2024-11-19 01:17:11.436857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.966 qpair failed and we were unable to recover it. 
00:33:04.966 [2024-11-19 01:17:11.449343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.966 [2024-11-19 01:17:11.449407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.966 [2024-11-19 01:17:11.449432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.966 [2024-11-19 01:17:11.449444] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.966 [2024-11-19 01:17:11.449455] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.966 [2024-11-19 01:17:11.456932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.966 qpair failed and we were unable to recover it. 00:33:04.966 [2024-11-19 01:17:11.469369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.966 [2024-11-19 01:17:11.469435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.966 [2024-11-19 01:17:11.469457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.966 [2024-11-19 01:17:11.469471] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.966 [2024-11-19 01:17:11.469480] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.966 [2024-11-19 01:17:11.476965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.966 qpair failed and we were unable to recover it. 00:33:04.966 [2024-11-19 01:17:11.489624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.966 [2024-11-19 01:17:11.489687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.966 [2024-11-19 01:17:11.489714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.966 [2024-11-19 01:17:11.489726] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.966 [2024-11-19 01:17:11.489736] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.966 [2024-11-19 01:17:11.497164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.966 qpair failed and we were unable to recover it. 
00:33:04.966 [2024-11-19 01:17:11.509647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.966 [2024-11-19 01:17:11.509718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.966 [2024-11-19 01:17:11.509741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.966 [2024-11-19 01:17:11.509754] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.966 [2024-11-19 01:17:11.509764] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.966 [2024-11-19 01:17:11.517256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.966 qpair failed and we were unable to recover it. 00:33:04.966 [2024-11-19 01:17:11.529788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.966 [2024-11-19 01:17:11.529854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.966 [2024-11-19 01:17:11.529879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.966 [2024-11-19 01:17:11.529891] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.966 [2024-11-19 01:17:11.529904] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.966 [2024-11-19 01:17:11.537260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.966 qpair failed and we were unable to recover it. 00:33:04.966 [2024-11-19 01:17:11.549854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.966 [2024-11-19 01:17:11.549922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.966 [2024-11-19 01:17:11.549945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.966 [2024-11-19 01:17:11.549957] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.966 [2024-11-19 01:17:11.549966] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.966 [2024-11-19 01:17:11.557271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.966 qpair failed and we were unable to recover it. 
00:33:04.966 [2024-11-19 01:17:11.569892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.966 [2024-11-19 01:17:11.569956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.966 [2024-11-19 01:17:11.569979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.966 [2024-11-19 01:17:11.569991] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.966 [2024-11-19 01:17:11.570001] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.966 [2024-11-19 01:17:11.577391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.966 qpair failed and we were unable to recover it. 00:33:04.966 [2024-11-19 01:17:11.589979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.966 [2024-11-19 01:17:11.590047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.966 [2024-11-19 01:17:11.590070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.966 [2024-11-19 01:17:11.590081] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.966 [2024-11-19 01:17:11.590090] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.967 [2024-11-19 01:17:11.597450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.967 qpair failed and we were unable to recover it. 00:33:04.967 [2024-11-19 01:17:11.609953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.967 [2024-11-19 01:17:11.610016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.967 [2024-11-19 01:17:11.610038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.967 [2024-11-19 01:17:11.610050] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.967 [2024-11-19 01:17:11.610059] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.967 [2024-11-19 01:17:11.617466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.967 qpair failed and we were unable to recover it. 
00:33:04.967 [2024-11-19 01:17:11.630056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.967 [2024-11-19 01:17:11.630126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.967 [2024-11-19 01:17:11.630149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.967 [2024-11-19 01:17:11.630160] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.967 [2024-11-19 01:17:11.630170] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:04.967 [2024-11-19 01:17:11.637525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:04.967 qpair failed and we were unable to recover it. 00:33:04.967 [2024-11-19 01:17:11.650104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:04.967 [2024-11-19 01:17:11.650165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:04.967 [2024-11-19 01:17:11.650187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:04.967 [2024-11-19 01:17:11.650202] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:04.967 [2024-11-19 01:17:11.650212] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.657608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 00:33:05.227 [2024-11-19 01:17:11.670140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.227 [2024-11-19 01:17:11.670201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.227 [2024-11-19 01:17:11.670224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.227 [2024-11-19 01:17:11.670236] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.227 [2024-11-19 01:17:11.670245] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.677680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 
00:33:05.227 [2024-11-19 01:17:11.690200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.227 [2024-11-19 01:17:11.690255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.227 [2024-11-19 01:17:11.690277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.227 [2024-11-19 01:17:11.690289] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.227 [2024-11-19 01:17:11.690305] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.697722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 00:33:05.227 [2024-11-19 01:17:11.710078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.227 [2024-11-19 01:17:11.710144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.227 [2024-11-19 01:17:11.710167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.227 [2024-11-19 01:17:11.710179] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.227 [2024-11-19 01:17:11.710188] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.717656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 00:33:05.227 [2024-11-19 01:17:11.730117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.227 [2024-11-19 01:17:11.730177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.227 [2024-11-19 01:17:11.730200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.227 [2024-11-19 01:17:11.730212] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.227 [2024-11-19 01:17:11.730224] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.737783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 
00:33:05.227 [2024-11-19 01:17:11.750426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.227 [2024-11-19 01:17:11.750489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.227 [2024-11-19 01:17:11.750512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.227 [2024-11-19 01:17:11.750523] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.227 [2024-11-19 01:17:11.750532] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.757891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 00:33:05.227 [2024-11-19 01:17:11.770480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.227 [2024-11-19 01:17:11.770544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.227 [2024-11-19 01:17:11.770567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.227 [2024-11-19 01:17:11.770578] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.227 [2024-11-19 01:17:11.770588] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.777989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 00:33:05.227 [2024-11-19 01:17:11.790479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.227 [2024-11-19 01:17:11.790545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.227 [2024-11-19 01:17:11.790568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.227 [2024-11-19 01:17:11.790579] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.227 [2024-11-19 01:17:11.790588] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.798029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 
00:33:05.227 [2024-11-19 01:17:11.810598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.227 [2024-11-19 01:17:11.810659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.227 [2024-11-19 01:17:11.810682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.227 [2024-11-19 01:17:11.810693] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.227 [2024-11-19 01:17:11.810702] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.818085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 00:33:05.227 [2024-11-19 01:17:11.830659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.227 [2024-11-19 01:17:11.830725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.227 [2024-11-19 01:17:11.830748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.227 [2024-11-19 01:17:11.830760] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.227 [2024-11-19 01:17:11.830769] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.838140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 00:33:05.227 [2024-11-19 01:17:11.850700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.227 [2024-11-19 01:17:11.850763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.227 [2024-11-19 01:17:11.850785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.227 [2024-11-19 01:17:11.850796] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.227 [2024-11-19 01:17:11.850806] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.227 [2024-11-19 01:17:11.858140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.227 qpair failed and we were unable to recover it. 
00:33:05.227 [2024-11-19 01:17:11.870609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.228 [2024-11-19 01:17:11.870675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.228 [2024-11-19 01:17:11.870698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.228 [2024-11-19 01:17:11.870710] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.228 [2024-11-19 01:17:11.870719] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.228 [2024-11-19 01:17:11.878102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.228 qpair failed and we were unable to recover it. 00:33:05.228 [2024-11-19 01:17:11.890788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.228 [2024-11-19 01:17:11.890847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.228 [2024-11-19 01:17:11.890869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.228 [2024-11-19 01:17:11.890881] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.228 [2024-11-19 01:17:11.890890] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.228 [2024-11-19 01:17:11.898214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.228 qpair failed and we were unable to recover it. 00:33:05.228 [2024-11-19 01:17:11.910947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.228 [2024-11-19 01:17:11.911018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.228 [2024-11-19 01:17:11.911043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.228 [2024-11-19 01:17:11.911055] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.228 [2024-11-19 01:17:11.911064] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.487 [2024-11-19 01:17:11.918415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.487 qpair failed and we were unable to recover it. 
00:33:05.487 [2024-11-19 01:17:11.930955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.487 [2024-11-19 01:17:11.931018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.487 [2024-11-19 01:17:11.931040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.487 [2024-11-19 01:17:11.931052] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.487 [2024-11-19 01:17:11.931060] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.487 [2024-11-19 01:17:11.938390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.487 qpair failed and we were unable to recover it. 00:33:05.487 [2024-11-19 01:17:11.951039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.487 [2024-11-19 01:17:11.951112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.487 [2024-11-19 01:17:11.951133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.487 [2024-11-19 01:17:11.951144] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.487 [2024-11-19 01:17:11.951153] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.487 [2024-11-19 01:17:11.960049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.487 qpair failed and we were unable to recover it. 00:33:05.487 [2024-11-19 01:17:11.970996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.487 [2024-11-19 01:17:11.971063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.487 [2024-11-19 01:17:11.971085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.487 [2024-11-19 01:17:11.971097] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:11.971106] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.488 [2024-11-19 01:17:11.978605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.488 qpair failed and we were unable to recover it. 
00:33:05.488 [2024-11-19 01:17:11.991201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.488 [2024-11-19 01:17:11.991267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.488 [2024-11-19 01:17:11.991289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.488 [2024-11-19 01:17:11.991309] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:11.991319] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.488 [2024-11-19 01:17:11.998558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.488 qpair failed and we were unable to recover it. 00:33:05.488 [2024-11-19 01:17:12.011141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.488 [2024-11-19 01:17:12.011200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.488 [2024-11-19 01:17:12.011222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.488 [2024-11-19 01:17:12.011233] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:12.011242] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.488 [2024-11-19 01:17:12.018635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.488 qpair failed and we were unable to recover it. 00:33:05.488 [2024-11-19 01:17:12.031243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.488 [2024-11-19 01:17:12.031311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.488 [2024-11-19 01:17:12.031334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.488 [2024-11-19 01:17:12.031345] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:12.031354] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.488 [2024-11-19 01:17:12.038716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.488 qpair failed and we were unable to recover it. 
00:33:05.488 [2024-11-19 01:17:12.051401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.488 [2024-11-19 01:17:12.051463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.488 [2024-11-19 01:17:12.051485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.488 [2024-11-19 01:17:12.051497] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:12.051506] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.488 [2024-11-19 01:17:12.058720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.488 qpair failed and we were unable to recover it. 00:33:05.488 [2024-11-19 01:17:12.071328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.488 [2024-11-19 01:17:12.071391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.488 [2024-11-19 01:17:12.071413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.488 [2024-11-19 01:17:12.071425] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:12.071434] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.488 [2024-11-19 01:17:12.078843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.488 qpair failed and we were unable to recover it. 00:33:05.488 [2024-11-19 01:17:12.092902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.488 [2024-11-19 01:17:12.092964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.488 [2024-11-19 01:17:12.092987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.488 [2024-11-19 01:17:12.092999] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:12.093008] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.488 [2024-11-19 01:17:12.098847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.488 qpair failed and we were unable to recover it. 
00:33:05.488 [2024-11-19 01:17:12.111488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.488 [2024-11-19 01:17:12.111557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.488 [2024-11-19 01:17:12.111580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.488 [2024-11-19 01:17:12.111591] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:12.111600] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.488 [2024-11-19 01:17:12.118986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.488 qpair failed and we were unable to recover it. 00:33:05.488 [2024-11-19 01:17:12.131571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.488 [2024-11-19 01:17:12.131632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.488 [2024-11-19 01:17:12.131654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.488 [2024-11-19 01:17:12.131667] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:12.131676] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.488 [2024-11-19 01:17:12.139068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.488 qpair failed and we were unable to recover it. 00:33:05.488 [2024-11-19 01:17:12.151729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.488 [2024-11-19 01:17:12.151796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.488 [2024-11-19 01:17:12.151818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.488 [2024-11-19 01:17:12.151830] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:12.151839] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.488 [2024-11-19 01:17:12.159056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.488 qpair failed and we were unable to recover it. 
00:33:05.488 [2024-11-19 01:17:12.171677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.488 [2024-11-19 01:17:12.171736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.488 [2024-11-19 01:17:12.171758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.488 [2024-11-19 01:17:12.171769] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.488 [2024-11-19 01:17:12.171779] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.748 [2024-11-19 01:17:12.179163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.748 qpair failed and we were unable to recover it. 00:33:05.748 [2024-11-19 01:17:12.191723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.748 [2024-11-19 01:17:12.191784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.748 [2024-11-19 01:17:12.191806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.748 [2024-11-19 01:17:12.191818] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.748 [2024-11-19 01:17:12.191827] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.748 [2024-11-19 01:17:12.199081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.748 qpair failed and we were unable to recover it. 00:33:05.748 [2024-11-19 01:17:12.211813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.748 [2024-11-19 01:17:12.211877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.748 [2024-11-19 01:17:12.211900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.748 [2024-11-19 01:17:12.211911] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.748 [2024-11-19 01:17:12.211920] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.748 [2024-11-19 01:17:12.219257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.748 qpair failed and we were unable to recover it. 
00:33:05.748 [2024-11-19 01:17:12.231931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.748 [2024-11-19 01:17:12.232002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.748 [2024-11-19 01:17:12.232026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.748 [2024-11-19 01:17:12.232037] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.748 [2024-11-19 01:17:12.232046] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.748 [2024-11-19 01:17:12.239344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.748 qpair failed and we were unable to recover it. 00:33:05.748 [2024-11-19 01:17:12.251932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.748 [2024-11-19 01:17:12.251992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.748 [2024-11-19 01:17:12.252019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.748 [2024-11-19 01:17:12.252030] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.748 [2024-11-19 01:17:12.252039] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.748 [2024-11-19 01:17:12.259377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.748 qpair failed and we were unable to recover it. 00:33:05.748 [2024-11-19 01:17:12.271998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.748 [2024-11-19 01:17:12.272071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.748 [2024-11-19 01:17:12.272094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.748 [2024-11-19 01:17:12.272105] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.748 [2024-11-19 01:17:12.272114] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.748 [2024-11-19 01:17:12.279424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.748 qpair failed and we were unable to recover it. 
00:33:05.748 [2024-11-19 01:17:12.292153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.748 [2024-11-19 01:17:12.292217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.748 [2024-11-19 01:17:12.292239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.748 [2024-11-19 01:17:12.292251] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.748 [2024-11-19 01:17:12.292260] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.748 [2024-11-19 01:17:12.299565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.748 qpair failed and we were unable to recover it. 00:33:05.748 [2024-11-19 01:17:12.311996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.748 [2024-11-19 01:17:12.312065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.749 [2024-11-19 01:17:12.312087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.749 [2024-11-19 01:17:12.312099] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.749 [2024-11-19 01:17:12.312108] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.749 [2024-11-19 01:17:12.319588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.749 qpair failed and we were unable to recover it. 00:33:05.749 [2024-11-19 01:17:12.332173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.749 [2024-11-19 01:17:12.332232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.749 [2024-11-19 01:17:12.332255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.749 [2024-11-19 01:17:12.332271] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.749 [2024-11-19 01:17:12.332280] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.749 [2024-11-19 01:17:12.339624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.749 qpair failed and we were unable to recover it. 
00:33:05.749 [2024-11-19 01:17:12.352255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.749 [2024-11-19 01:17:12.352326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.749 [2024-11-19 01:17:12.352349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.749 [2024-11-19 01:17:12.352360] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.749 [2024-11-19 01:17:12.352369] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.749 [2024-11-19 01:17:12.359692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.749 qpair failed and we were unable to recover it. 00:33:05.749 [2024-11-19 01:17:12.372245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.749 [2024-11-19 01:17:12.372310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.749 [2024-11-19 01:17:12.372333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.749 [2024-11-19 01:17:12.372344] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.749 [2024-11-19 01:17:12.372354] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.749 [2024-11-19 01:17:12.379693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.749 qpair failed and we were unable to recover it. 00:33:05.749 [2024-11-19 01:17:12.392393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.749 [2024-11-19 01:17:12.392460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.749 [2024-11-19 01:17:12.392481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.749 [2024-11-19 01:17:12.392493] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.749 [2024-11-19 01:17:12.392502] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.749 [2024-11-19 01:17:12.399856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.749 qpair failed and we were unable to recover it. 
00:33:05.749 [2024-11-19 01:17:12.412332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.749 [2024-11-19 01:17:12.412394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.749 [2024-11-19 01:17:12.412416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.749 [2024-11-19 01:17:12.412428] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.749 [2024-11-19 01:17:12.412436] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:05.749 [2024-11-19 01:17:12.419773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.749 qpair failed and we were unable to recover it. 00:33:05.749 [2024-11-19 01:17:12.432385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:05.749 [2024-11-19 01:17:12.432449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:05.749 [2024-11-19 01:17:12.432471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:05.749 [2024-11-19 01:17:12.432483] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:05.749 [2024-11-19 01:17:12.432491] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.008 [2024-11-19 01:17:12.439993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.008 qpair failed and we were unable to recover it. 00:33:06.008 [2024-11-19 01:17:12.452577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.008 [2024-11-19 01:17:12.452639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.008 [2024-11-19 01:17:12.452662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.008 [2024-11-19 01:17:12.452673] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.008 [2024-11-19 01:17:12.452682] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.008 [2024-11-19 01:17:12.459918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.008 qpair failed and we were unable to recover it. 
00:33:06.008 [2024-11-19 01:17:12.472655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.008 [2024-11-19 01:17:12.472743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.008 [2024-11-19 01:17:12.472766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.008 [2024-11-19 01:17:12.472778] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.008 [2024-11-19 01:17:12.472787] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.008 [2024-11-19 01:17:12.479992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.008 qpair failed and we were unable to recover it. 00:33:06.008 [2024-11-19 01:17:12.492630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.008 [2024-11-19 01:17:12.492689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.008 [2024-11-19 01:17:12.492712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.008 [2024-11-19 01:17:12.492723] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.008 [2024-11-19 01:17:12.492732] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.008 [2024-11-19 01:17:12.500042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.008 qpair failed and we were unable to recover it. 00:33:06.008 [2024-11-19 01:17:12.512630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.008 [2024-11-19 01:17:12.512691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.008 [2024-11-19 01:17:12.512713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.008 [2024-11-19 01:17:12.512725] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.008 [2024-11-19 01:17:12.512734] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.008 [2024-11-19 01:17:12.520110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.008 qpair failed and we were unable to recover it. 
00:33:06.008 [2024-11-19 01:17:12.532673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.008 [2024-11-19 01:17:12.532734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.008 [2024-11-19 01:17:12.532756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.008 [2024-11-19 01:17:12.532768] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.008 [2024-11-19 01:17:12.532777] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.008 [2024-11-19 01:17:12.540341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.008 qpair failed and we were unable to recover it. 00:33:06.008 [2024-11-19 01:17:12.552829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.008 [2024-11-19 01:17:12.552899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.008 [2024-11-19 01:17:12.552922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.008 [2024-11-19 01:17:12.552933] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.008 [2024-11-19 01:17:12.552942] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.008 [2024-11-19 01:17:12.560365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.008 qpair failed and we were unable to recover it. 00:33:06.008 [2024-11-19 01:17:12.572622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.008 [2024-11-19 01:17:12.572686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.008 [2024-11-19 01:17:12.572709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.008 [2024-11-19 01:17:12.572721] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.008 [2024-11-19 01:17:12.572730] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.008 [2024-11-19 01:17:12.580225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.008 qpair failed and we were unable to recover it. 
00:33:06.008 [2024-11-19 01:17:12.592721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.008 [2024-11-19 01:17:12.592794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.008 [2024-11-19 01:17:12.592820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.008 [2024-11-19 01:17:12.592831] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.008 [2024-11-19 01:17:12.592840] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.009 [2024-11-19 01:17:12.600265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.009 qpair failed and we were unable to recover it. 00:33:06.009 [2024-11-19 01:17:12.612678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.009 [2024-11-19 01:17:12.612739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.009 [2024-11-19 01:17:12.612761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.009 [2024-11-19 01:17:12.612772] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.009 [2024-11-19 01:17:12.612782] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.009 [2024-11-19 01:17:12.620492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.009 qpair failed and we were unable to recover it. 00:33:06.009 [2024-11-19 01:17:12.632962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.009 [2024-11-19 01:17:12.633023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.009 [2024-11-19 01:17:12.633045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.009 [2024-11-19 01:17:12.633057] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.009 [2024-11-19 01:17:12.633066] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.009 [2024-11-19 01:17:12.640424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.009 qpair failed and we were unable to recover it. 
00:33:06.009 [2024-11-19 01:17:12.652903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.009 [2024-11-19 01:17:12.652964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.009 [2024-11-19 01:17:12.652986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.009 [2024-11-19 01:17:12.652997] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.009 [2024-11-19 01:17:12.653006] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.009 [2024-11-19 01:17:12.660537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.009 qpair failed and we were unable to recover it. 00:33:06.009 [2024-11-19 01:17:12.672987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.009 [2024-11-19 01:17:12.673054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.009 [2024-11-19 01:17:12.673076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.009 [2024-11-19 01:17:12.673088] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.009 [2024-11-19 01:17:12.673100] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.009 [2024-11-19 01:17:12.680592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.009 qpair failed and we were unable to recover it. 00:33:06.009 [2024-11-19 01:17:12.693047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.009 [2024-11-19 01:17:12.693106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.009 [2024-11-19 01:17:12.693128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.009 [2024-11-19 01:17:12.693139] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.009 [2024-11-19 01:17:12.693149] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.268 [2024-11-19 01:17:12.700584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.268 qpair failed and we were unable to recover it. 
00:33:06.268 [2024-11-19 01:17:12.713056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.268 [2024-11-19 01:17:12.713122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.268 [2024-11-19 01:17:12.713146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.268 [2024-11-19 01:17:12.713158] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.268 [2024-11-19 01:17:12.713167] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.268 [2024-11-19 01:17:12.720704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.268 qpair failed and we were unable to recover it. 00:33:06.268 [2024-11-19 01:17:12.733158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.268 [2024-11-19 01:17:12.733220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.268 [2024-11-19 01:17:12.733244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.268 [2024-11-19 01:17:12.733256] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.268 [2024-11-19 01:17:12.733265] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.268 [2024-11-19 01:17:12.740738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.268 qpair failed and we were unable to recover it. 00:33:06.268 [2024-11-19 01:17:12.753284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.268 [2024-11-19 01:17:12.753360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.268 [2024-11-19 01:17:12.753384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.268 [2024-11-19 01:17:12.753396] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.268 [2024-11-19 01:17:12.753405] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.268 [2024-11-19 01:17:12.760634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.268 qpair failed and we were unable to recover it. 
00:33:06.268 [2024-11-19 01:17:12.773241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.268 [2024-11-19 01:17:12.773307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.268 [2024-11-19 01:17:12.773330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.268 [2024-11-19 01:17:12.773342] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.268 [2024-11-19 01:17:12.773351] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.268 [2024-11-19 01:17:12.780847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.268 qpair failed and we were unable to recover it. 00:33:06.268 [2024-11-19 01:17:12.793362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.268 [2024-11-19 01:17:12.793424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.268 [2024-11-19 01:17:12.793447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.268 [2024-11-19 01:17:12.793459] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.268 [2024-11-19 01:17:12.793467] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.268 [2024-11-19 01:17:12.800850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.268 qpair failed and we were unable to recover it. 00:33:06.268 [2024-11-19 01:17:12.813402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.269 [2024-11-19 01:17:12.813462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.269 [2024-11-19 01:17:12.813485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.269 [2024-11-19 01:17:12.813497] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.269 [2024-11-19 01:17:12.813506] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.269 [2024-11-19 01:17:12.820949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.269 qpair failed and we were unable to recover it. 
00:33:06.269 [2024-11-19 01:17:12.833303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.269 [2024-11-19 01:17:12.833366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.269 [2024-11-19 01:17:12.833388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.269 [2024-11-19 01:17:12.833400] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.269 [2024-11-19 01:17:12.833410] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.269 [2024-11-19 01:17:12.841071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.269 qpair failed and we were unable to recover it. 00:33:06.269 [2024-11-19 01:17:12.853610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.269 [2024-11-19 01:17:12.853672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.269 [2024-11-19 01:17:12.853695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.269 [2024-11-19 01:17:12.853706] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.269 [2024-11-19 01:17:12.853715] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.269 [2024-11-19 01:17:12.861149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.269 qpair failed and we were unable to recover it. 00:33:06.269 [2024-11-19 01:17:12.873590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.269 [2024-11-19 01:17:12.873661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.269 [2024-11-19 01:17:12.873684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.269 [2024-11-19 01:17:12.873695] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.269 [2024-11-19 01:17:12.873704] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.269 [2024-11-19 01:17:12.881252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.269 qpair failed and we were unable to recover it. 
00:33:06.269 [2024-11-19 01:17:12.893622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.269 [2024-11-19 01:17:12.893686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.269 [2024-11-19 01:17:12.893708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.269 [2024-11-19 01:17:12.893720] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.269 [2024-11-19 01:17:12.893729] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.269 [2024-11-19 01:17:12.901166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.269 qpair failed and we were unable to recover it. 00:33:06.269 [2024-11-19 01:17:12.913676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.269 [2024-11-19 01:17:12.913748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.269 [2024-11-19 01:17:12.913771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.269 [2024-11-19 01:17:12.913783] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.269 [2024-11-19 01:17:12.913792] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.269 [2024-11-19 01:17:12.921228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.269 qpair failed and we were unable to recover it. 00:33:06.269 [2024-11-19 01:17:12.933727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.269 [2024-11-19 01:17:12.933793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.269 [2024-11-19 01:17:12.933822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.269 [2024-11-19 01:17:12.933835] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.269 [2024-11-19 01:17:12.933844] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.269 [2024-11-19 01:17:12.941436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.269 qpair failed and we were unable to recover it. 
00:33:06.269 [2024-11-19 01:17:12.953908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.269 [2024-11-19 01:17:12.953971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.269 [2024-11-19 01:17:12.953993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.269 [2024-11-19 01:17:12.954005] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.269 [2024-11-19 01:17:12.954016] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.530 [2024-11-19 01:17:12.961418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.530 qpair failed and we were unable to recover it. 00:33:06.530 [2024-11-19 01:17:12.973870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.530 [2024-11-19 01:17:12.973939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.530 [2024-11-19 01:17:12.973962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.530 [2024-11-19 01:17:12.973974] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.530 [2024-11-19 01:17:12.973983] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.530 [2024-11-19 01:17:12.981480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.530 qpair failed and we were unable to recover it. 00:33:06.530 [2024-11-19 01:17:12.993980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.530 [2024-11-19 01:17:12.994039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.530 [2024-11-19 01:17:12.994061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.530 [2024-11-19 01:17:12.994073] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.530 [2024-11-19 01:17:12.994083] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.530 [2024-11-19 01:17:13.001590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.530 qpair failed and we were unable to recover it. 
00:33:06.530 [2024-11-19 01:17:13.014168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.530 [2024-11-19 01:17:13.014230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.530 [2024-11-19 01:17:13.014253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.530 [2024-11-19 01:17:13.014265] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.530 [2024-11-19 01:17:13.014278] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.530 [2024-11-19 01:17:13.021597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.530 qpair failed and we were unable to recover it. 00:33:06.530 [2024-11-19 01:17:13.034077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.530 [2024-11-19 01:17:13.034139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.530 [2024-11-19 01:17:13.034161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.530 [2024-11-19 01:17:13.034173] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.530 [2024-11-19 01:17:13.034182] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.530 [2024-11-19 01:17:13.041681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.530 qpair failed and we were unable to recover it. 00:33:06.530 [2024-11-19 01:17:13.054112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.530 [2024-11-19 01:17:13.054173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.530 [2024-11-19 01:17:13.054195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.530 [2024-11-19 01:17:13.054208] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.530 [2024-11-19 01:17:13.054217] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.530 [2024-11-19 01:17:13.061688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.530 qpair failed and we were unable to recover it. 
00:33:06.530 [2024-11-19 01:17:13.074152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.530 [2024-11-19 01:17:13.074218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.530 [2024-11-19 01:17:13.074240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.530 [2024-11-19 01:17:13.074252] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.530 [2024-11-19 01:17:13.074261] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.530 [2024-11-19 01:17:13.081746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.530 qpair failed and we were unable to recover it. 00:33:06.530 [2024-11-19 01:17:13.094236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.530 [2024-11-19 01:17:13.094299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.530 [2024-11-19 01:17:13.094322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.530 [2024-11-19 01:17:13.094335] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.530 [2024-11-19 01:17:13.094344] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.530 [2024-11-19 01:17:13.101801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.530 qpair failed and we were unable to recover it. 00:33:06.530 [2024-11-19 01:17:13.114167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.530 [2024-11-19 01:17:13.114234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.530 [2024-11-19 01:17:13.114257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.530 [2024-11-19 01:17:13.114268] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.530 [2024-11-19 01:17:13.114277] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.530 [2024-11-19 01:17:13.121896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.530 qpair failed and we were unable to recover it. 
00:33:06.530 [2024-11-19 01:17:13.134450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.530 [2024-11-19 01:17:13.134510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.530 [2024-11-19 01:17:13.134533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.530 [2024-11-19 01:17:13.134544] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.530 [2024-11-19 01:17:13.134553] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.530 [2024-11-19 01:17:13.141976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.531 qpair failed and we were unable to recover it. 00:33:06.531 [2024-11-19 01:17:13.154373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.531 [2024-11-19 01:17:13.154435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.531 [2024-11-19 01:17:13.154457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.531 [2024-11-19 01:17:13.154469] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.531 [2024-11-19 01:17:13.154478] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.531 [2024-11-19 01:17:13.162047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.531 qpair failed and we were unable to recover it. 00:33:06.531 [2024-11-19 01:17:13.174479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.531 [2024-11-19 01:17:13.174541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.531 [2024-11-19 01:17:13.174563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.531 [2024-11-19 01:17:13.174575] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.531 [2024-11-19 01:17:13.174584] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.531 [2024-11-19 01:17:13.182098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.531 qpair failed and we were unable to recover it. 
00:33:06.531 [2024-11-19 01:17:13.194507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.531 [2024-11-19 01:17:13.194583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.531 [2024-11-19 01:17:13.194606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.531 [2024-11-19 01:17:13.194618] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.531 [2024-11-19 01:17:13.194627] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.531 [2024-11-19 01:17:13.202114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.531 qpair failed and we were unable to recover it. 00:33:06.531 [2024-11-19 01:17:13.214652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.531 [2024-11-19 01:17:13.214712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.531 [2024-11-19 01:17:13.214735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.531 [2024-11-19 01:17:13.214747] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.531 [2024-11-19 01:17:13.214756] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.791 [2024-11-19 01:17:13.222172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.791 qpair failed and we were unable to recover it. 00:33:06.791 [2024-11-19 01:17:13.234622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.791 [2024-11-19 01:17:13.234687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.791 [2024-11-19 01:17:13.234711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.791 [2024-11-19 01:17:13.234723] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.791 [2024-11-19 01:17:13.234732] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.791 [2024-11-19 01:17:13.242220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.791 qpair failed and we were unable to recover it. 
00:33:06.791 [2024-11-19 01:17:13.254762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.791 [2024-11-19 01:17:13.254826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.791 [2024-11-19 01:17:13.254848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.791 [2024-11-19 01:17:13.254861] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.791 [2024-11-19 01:17:13.254871] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.791 [2024-11-19 01:17:13.262332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.791 qpair failed and we were unable to recover it. 00:33:06.791 [2024-11-19 01:17:13.274659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.791 [2024-11-19 01:17:13.274723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.791 [2024-11-19 01:17:13.274749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.791 [2024-11-19 01:17:13.274761] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.791 [2024-11-19 01:17:13.274770] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.791 [2024-11-19 01:17:13.282373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.791 qpair failed and we were unable to recover it. 00:33:06.791 [2024-11-19 01:17:13.294853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.791 [2024-11-19 01:17:13.294919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.791 [2024-11-19 01:17:13.294942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.791 [2024-11-19 01:17:13.294954] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.791 [2024-11-19 01:17:13.294964] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.791 [2024-11-19 01:17:13.302461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.791 qpair failed and we were unable to recover it. 
00:33:06.791 [2024-11-19 01:17:13.314850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.791 [2024-11-19 01:17:13.314919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.791 [2024-11-19 01:17:13.314942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.791 [2024-11-19 01:17:13.314954] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.791 [2024-11-19 01:17:13.314963] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.791 [2024-11-19 01:17:13.322437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.791 qpair failed and we were unable to recover it. 00:33:06.791 [2024-11-19 01:17:13.334886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.791 [2024-11-19 01:17:13.334949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.791 [2024-11-19 01:17:13.334972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.791 [2024-11-19 01:17:13.334984] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.791 [2024-11-19 01:17:13.334993] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.791 [2024-11-19 01:17:13.342518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.791 qpair failed and we were unable to recover it. 00:33:06.791 [2024-11-19 01:17:13.355005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.792 [2024-11-19 01:17:13.355071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.792 [2024-11-19 01:17:13.355093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.792 [2024-11-19 01:17:13.355105] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.792 [2024-11-19 01:17:13.355117] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.792 [2024-11-19 01:17:13.362595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.792 qpair failed and we were unable to recover it. 
00:33:06.792 [2024-11-19 01:17:13.375105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.792 [2024-11-19 01:17:13.375165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.792 [2024-11-19 01:17:13.375188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.792 [2024-11-19 01:17:13.375199] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.792 [2024-11-19 01:17:13.375209] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.792 [2024-11-19 01:17:13.382658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.792 qpair failed and we were unable to recover it. 00:33:06.792 [2024-11-19 01:17:13.395127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.792 [2024-11-19 01:17:13.395196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.792 [2024-11-19 01:17:13.395218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.792 [2024-11-19 01:17:13.395230] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.792 [2024-11-19 01:17:13.395239] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.792 [2024-11-19 01:17:13.402681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.792 qpair failed and we were unable to recover it. 00:33:06.792 [2024-11-19 01:17:13.415104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.792 [2024-11-19 01:17:13.415163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.792 [2024-11-19 01:17:13.415185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.792 [2024-11-19 01:17:13.415197] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.792 [2024-11-19 01:17:13.415207] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.792 [2024-11-19 01:17:13.422802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.792 qpair failed and we were unable to recover it. 
00:33:06.792 [2024-11-19 01:17:13.435310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.792 [2024-11-19 01:17:13.435377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.792 [2024-11-19 01:17:13.435400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.792 [2024-11-19 01:17:13.435412] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.792 [2024-11-19 01:17:13.435422] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.792 [2024-11-19 01:17:13.442853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.792 qpair failed and we were unable to recover it. 00:33:06.792 [2024-11-19 01:17:13.455284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.792 [2024-11-19 01:17:13.455351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.792 [2024-11-19 01:17:13.455373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.792 [2024-11-19 01:17:13.455384] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.792 [2024-11-19 01:17:13.455394] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:06.792 [2024-11-19 01:17:13.462894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:06.792 qpair failed and we were unable to recover it. 00:33:06.792 [2024-11-19 01:17:13.475362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:06.792 [2024-11-19 01:17:13.475439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:06.792 [2024-11-19 01:17:13.475462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:06.792 [2024-11-19 01:17:13.475474] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:06.792 [2024-11-19 01:17:13.475483] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.051 [2024-11-19 01:17:13.482939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.051 qpair failed and we were unable to recover it. 
00:33:07.051 [2024-11-19 01:17:13.495520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.051 [2024-11-19 01:17:13.495579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.051 [2024-11-19 01:17:13.495602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.051 [2024-11-19 01:17:13.495614] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.051 [2024-11-19 01:17:13.495623] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.051 [2024-11-19 01:17:13.502988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.052 [2024-11-19 01:17:13.515466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.515532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.515554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.515566] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.515575] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.523101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 00:33:07.052 [2024-11-19 01:17:13.535481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.535543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.535570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.535583] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.535592] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.543112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 
00:33:07.052 [2024-11-19 01:17:13.555568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.555642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.555665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.555676] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.555686] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.563150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 00:33:07.052 [2024-11-19 01:17:13.575658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.575724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.575747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.575758] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.575767] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.583266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 00:33:07.052 [2024-11-19 01:17:13.595772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.595839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.595862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.595873] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.595882] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.603274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 
00:33:07.052 [2024-11-19 01:17:13.615644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.615707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.615729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.615745] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.615754] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.623412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 00:33:07.052 [2024-11-19 01:17:13.635856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.635921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.635944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.635956] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.635965] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.643431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 00:33:07.052 [2024-11-19 01:17:13.658406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.658467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.658490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.658503] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.658512] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.663451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 
00:33:07.052 [2024-11-19 01:17:13.676009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.676078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.676101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.676112] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.676122] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.683539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 00:33:07.052 [2024-11-19 01:17:13.695973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.696034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.696056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.696069] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.696078] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.703673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 00:33:07.052 [2024-11-19 01:17:13.716009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.052 [2024-11-19 01:17:13.716071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.052 [2024-11-19 01:17:13.716093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.052 [2024-11-19 01:17:13.716105] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.052 [2024-11-19 01:17:13.716114] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.052 [2024-11-19 01:17:13.723653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.052 qpair failed and we were unable to recover it. 
00:33:07.052 [2024-11-19 01:17:13.736176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.053 [2024-11-19 01:17:13.736237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.053 [2024-11-19 01:17:13.736260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.053 [2024-11-19 01:17:13.736272] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.053 [2024-11-19 01:17:13.736281] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.312 [2024-11-19 01:17:13.743750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 00:33:07.313 [2024-11-19 01:17:13.756230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.756305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.756327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.756340] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.313 [2024-11-19 01:17:13.756349] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.313 [2024-11-19 01:17:13.763777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 00:33:07.313 [2024-11-19 01:17:13.776250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.776318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.776341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.776354] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.313 [2024-11-19 01:17:13.776363] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.313 [2024-11-19 01:17:13.786842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 
00:33:07.313 [2024-11-19 01:17:13.796593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.796664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.796687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.796700] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.313 [2024-11-19 01:17:13.796710] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.313 [2024-11-19 01:17:13.804016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 00:33:07.313 [2024-11-19 01:17:13.816637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.816699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.816722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.816734] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.313 [2024-11-19 01:17:13.816744] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.313 [2024-11-19 01:17:13.823976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 00:33:07.313 [2024-11-19 01:17:13.836248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.836317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.836341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.836353] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.313 [2024-11-19 01:17:13.836362] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.313 [2024-11-19 01:17:13.843942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 
00:33:07.313 [2024-11-19 01:17:13.856404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.856466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.856489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.856501] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.313 [2024-11-19 01:17:13.856510] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.313 [2024-11-19 01:17:13.864080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 00:33:07.313 [2024-11-19 01:17:13.876460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.876525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.876551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.876564] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.313 [2024-11-19 01:17:13.876573] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.313 [2024-11-19 01:17:13.884061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 00:33:07.313 [2024-11-19 01:17:13.896766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.896828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.896851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.896863] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.313 [2024-11-19 01:17:13.896873] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.313 [2024-11-19 01:17:13.904263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 
00:33:07.313 [2024-11-19 01:17:13.917820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.917892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.917915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.917926] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.313 [2024-11-19 01:17:13.917936] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.313 [2024-11-19 01:17:13.924155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 00:33:07.313 [2024-11-19 01:17:13.936929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.936990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.937013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.937025] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.313 [2024-11-19 01:17:13.937034] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.313 [2024-11-19 01:17:13.944428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.313 qpair failed and we were unable to recover it. 00:33:07.313 [2024-11-19 01:17:13.956968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.313 [2024-11-19 01:17:13.957031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.313 [2024-11-19 01:17:13.957054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.313 [2024-11-19 01:17:13.957069] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.314 [2024-11-19 01:17:13.957078] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.314 [2024-11-19 01:17:13.964324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.314 qpair failed and we were unable to recover it. 
00:33:07.314 [2024-11-19 01:17:13.976943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.314 [2024-11-19 01:17:13.977005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.314 [2024-11-19 01:17:13.977028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.314 [2024-11-19 01:17:13.977040] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.314 [2024-11-19 01:17:13.977049] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.314 [2024-11-19 01:17:13.984551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.314 qpair failed and we were unable to recover it. 00:33:07.314 [2024-11-19 01:17:13.997013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.314 [2024-11-19 01:17:13.997081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.314 [2024-11-19 01:17:13.997104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.314 [2024-11-19 01:17:13.997115] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.314 [2024-11-19 01:17:13.997123] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.573 [2024-11-19 01:17:14.004620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.573 qpair failed and we were unable to recover it. 00:33:07.573 [2024-11-19 01:17:14.017088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.573 [2024-11-19 01:17:14.017153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.573 [2024-11-19 01:17:14.017175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.573 [2024-11-19 01:17:14.017187] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.573 [2024-11-19 01:17:14.017197] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.573 [2024-11-19 01:17:14.024519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.573 qpair failed and we were unable to recover it. 
00:33:07.573 [2024-11-19 01:17:14.037100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.573 [2024-11-19 01:17:14.037168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.573 [2024-11-19 01:17:14.037191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.573 [2024-11-19 01:17:14.037203] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.573 [2024-11-19 01:17:14.037212] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.573 [2024-11-19 01:17:14.047126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.573 qpair failed and we were unable to recover it. 00:33:07.573 [2024-11-19 01:17:14.057359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.573 [2024-11-19 01:17:14.057420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.573 [2024-11-19 01:17:14.057443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.573 [2024-11-19 01:17:14.057455] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.573 [2024-11-19 01:17:14.057465] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.573 [2024-11-19 01:17:14.064741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.573 qpair failed and we were unable to recover it. 00:33:07.573 [2024-11-19 01:17:14.077132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.574 [2024-11-19 01:17:14.077199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.574 [2024-11-19 01:17:14.077221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.574 [2024-11-19 01:17:14.077233] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.574 [2024-11-19 01:17:14.077242] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.574 [2024-11-19 01:17:14.084848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.574 qpair failed and we were unable to recover it. 
00:33:07.574 [2024-11-19 01:17:14.097380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.574 [2024-11-19 01:17:14.097446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.574 [2024-11-19 01:17:14.097469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.574 [2024-11-19 01:17:14.097481] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.574 [2024-11-19 01:17:14.097490] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.574 [2024-11-19 01:17:14.104933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.574 qpair failed and we were unable to recover it. 00:33:07.574 [2024-11-19 01:17:14.117208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.574 [2024-11-19 01:17:14.117271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.574 [2024-11-19 01:17:14.117299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.574 [2024-11-19 01:17:14.117313] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.574 [2024-11-19 01:17:14.117323] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.574 [2024-11-19 01:17:14.124980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.574 qpair failed and we were unable to recover it. 00:33:07.574 [2024-11-19 01:17:14.137379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.574 [2024-11-19 01:17:14.137440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.574 [2024-11-19 01:17:14.137462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.574 [2024-11-19 01:17:14.137475] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.574 [2024-11-19 01:17:14.137484] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.574 [2024-11-19 01:17:14.144887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.574 qpair failed and we were unable to recover it. 
00:33:07.574 [2024-11-19 01:17:14.157301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.574 [2024-11-19 01:17:14.157373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.574 [2024-11-19 01:17:14.157396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.574 [2024-11-19 01:17:14.157408] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.574 [2024-11-19 01:17:14.157417] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.574 [2024-11-19 01:17:14.165068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.574 qpair failed and we were unable to recover it. 00:33:07.574 [2024-11-19 01:17:14.180072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.574 [2024-11-19 01:17:14.180136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.574 [2024-11-19 01:17:14.180160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.574 [2024-11-19 01:17:14.180173] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.574 [2024-11-19 01:17:14.180183] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.574 [2024-11-19 01:17:14.185133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.574 qpair failed and we were unable to recover it. 00:33:07.574 [2024-11-19 01:17:14.197811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.574 [2024-11-19 01:17:14.197881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.574 [2024-11-19 01:17:14.197904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.574 [2024-11-19 01:17:14.197916] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.574 [2024-11-19 01:17:14.197925] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.574 [2024-11-19 01:17:14.205202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.574 qpair failed and we were unable to recover it. 
00:33:07.574 [2024-11-19 01:17:14.217791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.574 [2024-11-19 01:17:14.217855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.574 [2024-11-19 01:17:14.217883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.574 [2024-11-19 01:17:14.217894] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.574 [2024-11-19 01:17:14.217904] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.574 [2024-11-19 01:17:14.225274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.574 qpair failed and we were unable to recover it. 00:33:07.574 [2024-11-19 01:17:14.237876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.574 [2024-11-19 01:17:14.237943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.574 [2024-11-19 01:17:14.237966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.574 [2024-11-19 01:17:14.237978] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.574 [2024-11-19 01:17:14.237987] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.574 [2024-11-19 01:17:14.245399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.574 qpair failed and we were unable to recover it. 00:33:07.575 [2024-11-19 01:17:14.257917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.575 [2024-11-19 01:17:14.257980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.575 [2024-11-19 01:17:14.258003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.575 [2024-11-19 01:17:14.258015] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.575 [2024-11-19 01:17:14.258023] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.834 [2024-11-19 01:17:14.265434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.834 qpair failed and we were unable to recover it. 
00:33:07.834 [2024-11-19 01:17:14.277952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.834 [2024-11-19 01:17:14.278022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.834 [2024-11-19 01:17:14.278045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.834 [2024-11-19 01:17:14.278056] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.834 [2024-11-19 01:17:14.278065] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.834 [2024-11-19 01:17:14.285489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.834 qpair failed and we were unable to recover it. 00:33:07.834 [2024-11-19 01:17:14.298065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.834 [2024-11-19 01:17:14.298128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.834 [2024-11-19 01:17:14.298150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.834 [2024-11-19 01:17:14.298165] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.834 [2024-11-19 01:17:14.298174] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.834 [2024-11-19 01:17:14.305524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.834 qpair failed and we were unable to recover it. 00:33:07.834 [2024-11-19 01:17:14.318099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.834 [2024-11-19 01:17:14.318162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.834 [2024-11-19 01:17:14.318185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.834 [2024-11-19 01:17:14.318197] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.834 [2024-11-19 01:17:14.318206] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.834 [2024-11-19 01:17:14.325586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.834 qpair failed and we were unable to recover it. 
00:33:07.834 [2024-11-19 01:17:14.338066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.834 [2024-11-19 01:17:14.338122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.834 [2024-11-19 01:17:14.338145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.834 [2024-11-19 01:17:14.338158] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.834 [2024-11-19 01:17:14.338167] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.834 [2024-11-19 01:17:14.345595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.834 qpair failed and we were unable to recover it. 00:33:07.834 [2024-11-19 01:17:14.358244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.834 [2024-11-19 01:17:14.358309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.834 [2024-11-19 01:17:14.358331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.834 [2024-11-19 01:17:14.358344] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.834 [2024-11-19 01:17:14.358353] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.834 [2024-11-19 01:17:14.365686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.834 qpair failed and we were unable to recover it. 00:33:07.834 [2024-11-19 01:17:14.378325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.834 [2024-11-19 01:17:14.378385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.834 [2024-11-19 01:17:14.378409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.834 [2024-11-19 01:17:14.378420] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.834 [2024-11-19 01:17:14.378430] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.834 [2024-11-19 01:17:14.385715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.834 qpair failed and we were unable to recover it. 
00:33:07.834 [2024-11-19 01:17:14.398323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.834 [2024-11-19 01:17:14.398389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.835 [2024-11-19 01:17:14.398412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.835 [2024-11-19 01:17:14.398423] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.835 [2024-11-19 01:17:14.398433] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.835 [2024-11-19 01:17:14.405824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.835 qpair failed and we were unable to recover it. 00:33:07.835 [2024-11-19 01:17:14.418311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.835 [2024-11-19 01:17:14.418378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.835 [2024-11-19 01:17:14.418401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.835 [2024-11-19 01:17:14.418413] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.835 [2024-11-19 01:17:14.418423] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.835 [2024-11-19 01:17:14.425850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.835 qpair failed and we were unable to recover it. 00:33:07.835 [2024-11-19 01:17:14.438474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.835 [2024-11-19 01:17:14.438544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.835 [2024-11-19 01:17:14.438567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.835 [2024-11-19 01:17:14.438578] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.835 [2024-11-19 01:17:14.438587] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.835 [2024-11-19 01:17:14.445874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.835 qpair failed and we were unable to recover it. 
00:33:07.835 [2024-11-19 01:17:14.458569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.835 [2024-11-19 01:17:14.458630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.835 [2024-11-19 01:17:14.458653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.835 [2024-11-19 01:17:14.458665] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.835 [2024-11-19 01:17:14.458675] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.835 [2024-11-19 01:17:14.466027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.835 qpair failed and we were unable to recover it. 00:33:07.835 [2024-11-19 01:17:14.478558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.835 [2024-11-19 01:17:14.478623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.835 [2024-11-19 01:17:14.478645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.835 [2024-11-19 01:17:14.478657] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.835 [2024-11-19 01:17:14.478666] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.835 [2024-11-19 01:17:14.486082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.835 qpair failed and we were unable to recover it. 00:33:07.835 [2024-11-19 01:17:14.498547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.835 [2024-11-19 01:17:14.498613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.835 [2024-11-19 01:17:14.498635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.835 [2024-11-19 01:17:14.498647] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.835 [2024-11-19 01:17:14.498656] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:07.835 [2024-11-19 01:17:14.506068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:07.835 qpair failed and we were unable to recover it. 
00:33:07.835 [2024-11-19 01:17:14.518709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:07.835 [2024-11-19 01:17:14.518781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:07.835 [2024-11-19 01:17:14.518804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:07.835 [2024-11-19 01:17:14.518816] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:07.835 [2024-11-19 01:17:14.518825] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.094 [2024-11-19 01:17:14.526160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.094 qpair failed and we were unable to recover it. 00:33:08.094 [2024-11-19 01:17:14.538796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.094 [2024-11-19 01:17:14.538859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.094 [2024-11-19 01:17:14.538881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.094 [2024-11-19 01:17:14.538893] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.094 [2024-11-19 01:17:14.538902] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.094 [2024-11-19 01:17:14.546189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.094 qpair failed and we were unable to recover it. 00:33:08.094 [2024-11-19 01:17:14.558726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.094 [2024-11-19 01:17:14.558791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.094 [2024-11-19 01:17:14.558817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.094 [2024-11-19 01:17:14.558829] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.094 [2024-11-19 01:17:14.558838] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.094 [2024-11-19 01:17:14.566272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.094 qpair failed and we were unable to recover it. 
00:33:08.094 [2024-11-19 01:17:14.578836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.094 [2024-11-19 01:17:14.578896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.094 [2024-11-19 01:17:14.578920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.094 [2024-11-19 01:17:14.578932] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.094 [2024-11-19 01:17:14.578942] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.095 [2024-11-19 01:17:14.586306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.095 qpair failed and we were unable to recover it. 00:33:08.095 [2024-11-19 01:17:14.598911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.095 [2024-11-19 01:17:14.598976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.095 [2024-11-19 01:17:14.598998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.095 [2024-11-19 01:17:14.599011] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.095 [2024-11-19 01:17:14.599020] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.095 [2024-11-19 01:17:14.606411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.095 qpair failed and we were unable to recover it. 00:33:08.095 [2024-11-19 01:17:14.618974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.095 [2024-11-19 01:17:14.619040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.095 [2024-11-19 01:17:14.619062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.095 [2024-11-19 01:17:14.619075] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.095 [2024-11-19 01:17:14.619084] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.095 [2024-11-19 01:17:14.626444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.095 qpair failed and we were unable to recover it. 
00:33:08.095 [2024-11-19 01:17:14.639014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.095 [2024-11-19 01:17:14.639077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.095 [2024-11-19 01:17:14.639100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.095 [2024-11-19 01:17:14.639112] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.095 [2024-11-19 01:17:14.639125] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.095 [2024-11-19 01:17:14.646499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.095 qpair failed and we were unable to recover it. 00:33:08.095 [2024-11-19 01:17:14.659116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.095 [2024-11-19 01:17:14.659179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.095 [2024-11-19 01:17:14.659201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.095 [2024-11-19 01:17:14.659213] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.095 [2024-11-19 01:17:14.659222] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.095 [2024-11-19 01:17:14.666558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.095 qpair failed and we were unable to recover it. 00:33:08.095 [2024-11-19 01:17:14.679226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.095 [2024-11-19 01:17:14.679291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.095 [2024-11-19 01:17:14.679321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.095 [2024-11-19 01:17:14.679333] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.095 [2024-11-19 01:17:14.679342] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.095 [2024-11-19 01:17:14.686624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.095 qpair failed and we were unable to recover it. 
00:33:08.095 [2024-11-19 01:17:14.699362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.095 [2024-11-19 01:17:14.699421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.095 [2024-11-19 01:17:14.699444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.095 [2024-11-19 01:17:14.699456] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.095 [2024-11-19 01:17:14.699465] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.095 [2024-11-19 01:17:14.708402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.095 qpair failed and we were unable to recover it. 00:33:08.095 [2024-11-19 01:17:14.719134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.095 [2024-11-19 01:17:14.719205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.095 [2024-11-19 01:17:14.719229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.095 [2024-11-19 01:17:14.719240] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.095 [2024-11-19 01:17:14.719250] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.095 [2024-11-19 01:17:14.726804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.095 qpair failed and we were unable to recover it. 00:33:08.095 [2024-11-19 01:17:14.739439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.095 [2024-11-19 01:17:14.739496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.095 [2024-11-19 01:17:14.739519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.095 [2024-11-19 01:17:14.739531] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.095 [2024-11-19 01:17:14.739540] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.095 [2024-11-19 01:17:14.746853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.095 qpair failed and we were unable to recover it. 
00:33:08.095 [2024-11-19 01:17:14.759416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.095 [2024-11-19 01:17:14.759478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.095 [2024-11-19 01:17:14.759501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.095 [2024-11-19 01:17:14.759514] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.095 [2024-11-19 01:17:14.759524] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:08.095 [2024-11-19 01:17:14.766928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:08.095 qpair failed and we were unable to recover it. 00:33:08.663 [2024-11-19 01:17:15.288332] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error 
(sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Read completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 Write completed with error (sct=0, sc=8) 00:33:08.663 starting I/O failed 00:33:08.663 [2024-11-19 01:17:15.289377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:08.663 [2024-11-19 01:17:15.300942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.663 [2024-11-19 01:17:15.301026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.663 [2024-11-19 01:17:15.301052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.663 [2024-11-19 01:17:15.301067] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.663 [2024-11-19 01:17:15.301077] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:33:08.663 [2024-11-19 01:17:15.308481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:08.663 qpair failed and we were unable to recover it. 00:33:08.663 [2024-11-19 01:17:15.320901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:08.663 [2024-11-19 01:17:15.320981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:08.663 [2024-11-19 01:17:15.321007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:08.663 [2024-11-19 01:17:15.321021] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:08.663 [2024-11-19 01:17:15.321035] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:33:08.663 [2024-11-19 01:17:15.328473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:08.663 qpair failed and we were unable to recover it. 00:33:08.663 [2024-11-19 01:17:15.328753] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:33:08.663 A controller has encountered a failure and is being reset. 00:33:08.663 [2024-11-19 01:17:15.328920] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:08.664 [2024-11-19 01:17:15.329467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:33:08.664 Controller properly reset. 
00:33:09.231 [2024-11-19 01:17:15.860337] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Write completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 Read completed with error (sct=0, sc=8) 00:33:09.231 starting I/O failed 00:33:09.231 [2024-11-19 01:17:15.861344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:09.799 [2024-11-19 01:17:16.435334] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 
00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Read completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 Write completed with error (sct=0, sc=8) 00:33:09.799 starting I/O failed 00:33:09.799 [2024-11-19 01:17:16.436419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:10.058 Initializing NVMe Controllers 00:33:10.058 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:10.058 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:10.058 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:10.058 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:10.058 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:10.058 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:10.058 Initialization complete. Launching workers. 
00:33:10.058 Starting thread on core 1 00:33:10.058 Starting thread on core 2 00:33:10.058 Starting thread on core 0 00:33:10.058 Starting thread on core 3 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:10.058 00:33:10.058 real 0m13.328s 00:33:10.058 user 0m27.564s 00:33:10.058 sys 0m2.842s 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.058 ************************************ 00:33:10.058 END TEST nvmf_target_disconnect_tc2 00:33:10.058 ************************************ 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:10.058 ************************************ 00:33:10.058 START TEST nvmf_target_disconnect_tc3 00:33:10.058 ************************************ 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=539691 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:33:10.058 01:17:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:33:12.595 01:17:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 538478 00:33:12.595 01:17:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:33:13.164 [2024-11-19 01:17:19.571338] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 
starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Write completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 Read completed with error (sct=0, sc=8) 00:33:13.164 starting I/O failed 00:33:13.164 [2024-11-19 01:17:19.572439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:13.164 [2024-11-19 01:17:19.574473] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:13.164 [2024-11-19 01:17:19.574498] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:13.164 [2024-11-19 01:17:19.574512] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:14.099 [2024-11-19 01:17:20.577440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:14.099 qpair failed and we were unable to recover it. 
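The burst of "completed with error (sct=0, sc=8) / starting I/O failed" lines per failed qpair matches the queue depth the reconnect example was launched with; the invocation is recorded at host/target_disconnect.sh@55 above and is repeated here with the parameters that matter for tc3 called out (192.168.100.8 is the primary listener, alt_traddr 192.168.100.9 the failover listener the host falls back to):
    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
    # -q 32: queue depth, hence the 32 aborted completions reported per failed qpair
    # -t 10: run time in seconds; -c 0xF: run the I/O threads on cores 0-3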
00:33:14.099 [2024-11-19 01:17:20.579464] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:14.099 [2024-11-19 01:17:20.579490] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:14.099 [2024-11-19 01:17:20.579502] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:14.099 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 538478 Killed "${NVMF_APP[@]}" "$@" 00:33:14.099 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:33:14.099 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:14.099 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:14.099 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.099 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:14.099 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=540340 00:33:14.099 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 540340 00:33:14.100 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:14.100 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 540340 ']' 00:33:14.100 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.100 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.100 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.100 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.100 01:17:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:14.358 [2024-11-19 01:17:20.831393] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:14.358 [2024-11-19 01:17:20.831481] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.358 [2024-11-19 01:17:20.962762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:14.616 [2024-11-19 01:17:21.078494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:14.616 [2024-11-19 01:17:21.078542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.616 [2024-11-19 01:17:21.078553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.616 [2024-11-19 01:17:21.078564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.616 [2024-11-19 01:17:21.078572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:14.616 [2024-11-19 01:17:21.081130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:14.616 [2024-11-19 01:17:21.081211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:14.616 [2024-11-19 01:17:21.081280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:14.616 [2024-11-19 01:17:21.081320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:15.183 [2024-11-19 01:17:21.582410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:15.183 qpair failed and we were unable to recover it. 00:33:15.183 [2024-11-19 01:17:21.584432] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:15.183 [2024-11-19 01:17:21.584456] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:15.183 [2024-11-19 01:17:21.584472] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:15.183 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.183 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:33:15.183 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:15.183 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:15.183 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:15.183 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:15.183 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:15.183 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.183 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:15.183 Malloc0 00:33:15.183 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:15.184 [2024-11-19 01:17:21.791978] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6120000298c0/0x617000007c40) succeed. 00:33:15.184 [2024-11-19 01:17:21.801863] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x612000029a40/0x617000007fc0) succeed. 00:33:15.184 [2024-11-19 01:17:21.801904] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:15.184 [2024-11-19 01:17:21.842354] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:15.184 01:17:21 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.184 01:17:21 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 539691 00:33:16.119 [2024-11-19 01:17:22.587383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:16.119 qpair failed and we were unable to recover it. 00:33:16.119 [2024-11-19 01:17:22.589426] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:16.119 [2024-11-19 01:17:22.589451] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:16.119 [2024-11-19 01:17:22.589464] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:17.052 [2024-11-19 01:17:23.592319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:17.052 qpair failed and we were unable to recover it. 00:33:17.052 [2024-11-19 01:17:23.594332] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:17.052 [2024-11-19 01:17:23.594356] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:17.052 [2024-11-19 01:17:23.594369] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:17.984 [2024-11-19 01:17:24.597283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:17.984 qpair failed and we were unable to recover it. 00:33:17.984 [2024-11-19 01:17:24.599323] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:17.984 [2024-11-19 01:17:24.599348] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:17.984 [2024-11-19 01:17:24.599361] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:18.918 [2024-11-19 01:17:25.602265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-11-19 01:17:25.604408] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:18.918 [2024-11-19 01:17:25.604433] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:18.918 [2024-11-19 01:17:25.604446] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:20.292 [2024-11-19 01:17:26.607353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:20.292 qpair failed and we were unable to recover it. 
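The rpc_cmd sequence above (bdev_malloc_create through nvmf_subsystem_add_listener) rebuilds the target after the original nvmf_tgt (pid 538478) was killed: a 64 MB malloc bdev with 512-byte blocks, an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and listeners on the failover address 192.168.100.9 for both the subsystem and discovery. Outside the autotest wrappers the same target could be stood up with scripts/rpc.py directly; this sketch simply mirrors the parameters recorded in the log:
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420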
00:33:20.292 [2024-11-19 01:17:26.609297] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:20.292 [2024-11-19 01:17:26.609322] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:20.292 [2024-11-19 01:17:26.609335] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:21.227 [2024-11-19 01:17:27.612239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:21.227 qpair failed and we were unable to recover it. 00:33:21.227 [2024-11-19 01:17:27.614373] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:21.227 [2024-11-19 01:17:27.614397] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:21.227 [2024-11-19 01:17:27.614413] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:33:22.160 [2024-11-19 01:17:28.617362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:22.160 qpair failed and we were unable to recover it. 00:33:22.726 [2024-11-19 01:17:29.172345] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Read completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Read completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Read completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Read completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Read completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Read completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Read completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Read completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, 
sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Read completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Write completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.726 Read completed with error (sct=0, sc=8) 00:33:22.726 starting I/O failed 00:33:22.727 Write completed with error (sct=0, sc=8) 00:33:22.727 starting I/O failed 00:33:22.727 Write completed with error (sct=0, sc=8) 00:33:22.727 starting I/O failed 00:33:22.727 Write completed with error (sct=0, sc=8) 00:33:22.727 starting I/O failed 00:33:22.727 [2024-11-19 01:17:29.173394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:33:22.727 [2024-11-19 01:17:29.175363] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:22.727 [2024-11-19 01:17:29.175386] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:22.727 [2024-11-19 01:17:29.175398] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:33:23.664 [2024-11-19 01:17:30.178331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:33:23.664 qpair failed and we were unable to recover it. 00:33:23.664 [2024-11-19 01:17:30.180553] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:23.664 [2024-11-19 01:17:30.180579] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:23.664 [2024-11-19 01:17:30.180591] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:33:24.601 [2024-11-19 01:17:31.183498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.601 qpair failed and we were unable to recover it. 
00:33:25.169 [2024-11-19 01:17:31.732342] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Read completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 Write completed with error (sct=0, sc=8) 00:33:25.169 starting I/O failed 00:33:25.169 [2024-11-19 01:17:31.733427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:33:25.169 [2024-11-19 01:17:31.735367] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:25.169 [2024-11-19 01:17:31.735390] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:25.169 [2024-11-19 
01:17:31.735403] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:33:26.107 [2024-11-19 01:17:32.739839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:33:26.107 qpair failed and we were unable to recover it. 00:33:26.107 [2024-11-19 01:17:32.741951] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:26.107 [2024-11-19 01:17:32.741976] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:26.107 [2024-11-19 01:17:32.741988] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:33:27.484 [2024-11-19 01:17:33.744878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:33:27.484 qpair failed and we were unable to recover it. 00:33:27.743 [2024-11-19 01:17:34.291342] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 
00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Write completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 Read completed with error (sct=0, sc=8) 00:33:27.743 starting I/O failed 00:33:27.743 [2024-11-19 01:17:34.292413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.743 [2024-11-19 01:17:34.294397] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:27.743 [2024-11-19 01:17:34.294422] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:27.743 [2024-11-19 01:17:34.294435] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:33:28.679 [2024-11-19 01:17:35.297387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.679 qpair failed and we were unable to recover it. 00:33:28.679 [2024-11-19 01:17:35.299663] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:28.679 [2024-11-19 01:17:35.299689] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:28.679 [2024-11-19 01:17:35.299701] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:33:29.615 [2024-11-19 01:17:36.302646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:29.615 qpair failed and we were unable to recover it. 00:33:29.615 [2024-11-19 01:17:36.302955] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:33:29.615 A controller has encountered a failure and is being reset. 00:33:29.615 Resorting to new failover address 192.168.100.9 00:33:29.615 [2024-11-19 01:17:36.303072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:29.615 [2024-11-19 01:17:36.303161] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:29.874 [2024-11-19 01:17:36.348002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:33:29.874 Controller properly reset. 
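"Resorting to new failover address 192.168.100.9" followed by "Controller properly reset." above shows the reconnect example abandoning the dead 192.168.100.8 listener and re-attaching through the alternate trid, after which the tc3 summary below is printed. A quick target-side confirmation that the failover listener is the one serving cnode1 could be made over the default RPC socket (a sketch; this query was not captured in the recorded run):
    scripts/rpc.py nvmf_get_subsystems
    # the listen_addresses entry for nqn.2016-06.io.spdk:cnode1 should show traddr 192.168.100.9, trsvcid 4420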
00:33:29.874 Initializing NVMe Controllers 00:33:29.874 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:29.874 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:29.874 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:29.874 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:29.874 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:29.874 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:29.874 Initialization complete. Launching workers. 00:33:29.874 Starting thread on core 1 00:33:29.874 Starting thread on core 2 00:33:29.874 Starting thread on core 0 00:33:29.874 Starting thread on core 3 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:33:30.133 00:33:30.133 real 0m19.850s 00:33:30.133 user 1m2.070s 00:33:30.133 sys 0m3.973s 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:30.133 ************************************ 00:33:30.133 END TEST nvmf_target_disconnect_tc3 00:33:30.133 ************************************ 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:30.133 rmmod nvme_rdma 00:33:30.133 rmmod nvme_fabrics 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 540340 ']' 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 540340 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 540340 ']' 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 540340 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 540340 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:33:30.133 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:33:30.134 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 540340' 00:33:30.134 killing process with pid 540340 00:33:30.134 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 540340 00:33:30.134 01:17:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 540340 00:33:31.512 01:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:31.512 01:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:31.512 00:33:31.512 real 0m42.139s 00:33:31.512 user 2m39.128s 00:33:31.512 sys 0m11.989s 00:33:31.512 01:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.512 01:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:31.512 ************************************ 00:33:31.512 END TEST nvmf_target_disconnect 00:33:31.512 ************************************ 00:33:31.512 01:17:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:31.512 00:33:31.512 real 7m19.526s 00:33:31.512 user 21m11.341s 00:33:31.512 sys 1m32.313s 00:33:31.512 01:17:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.512 01:17:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.512 ************************************ 00:33:31.512 END TEST nvmf_host 00:33:31.512 ************************************ 00:33:31.512 01:17:38 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:33:31.512 00:33:31.512 real 26m47.220s 00:33:31.512 user 78m37.841s 00:33:31.512 sys 5m54.644s 00:33:31.512 01:17:38 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.512 01:17:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:31.512 ************************************ 00:33:31.512 END TEST nvmf_rdma 00:33:31.512 ************************************ 00:33:31.771 01:17:38 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:33:31.771 01:17:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:31.771 01:17:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.771 01:17:38 -- common/autotest_common.sh@10 -- # set +x 00:33:31.771 ************************************ 00:33:31.771 START TEST spdkcli_nvmf_rdma 00:33:31.771 ************************************ 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:33:31.771 * Looking for test storage... 
00:33:31.771 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.771 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.772 --rc genhtml_branch_coverage=1 00:33:31.772 --rc genhtml_function_coverage=1 00:33:31.772 --rc genhtml_legend=1 00:33:31.772 --rc geninfo_all_blocks=1 00:33:31.772 --rc geninfo_unexecuted_blocks=1 00:33:31.772 00:33:31.772 ' 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:31.772 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:33:31.772 --rc genhtml_branch_coverage=1 00:33:31.772 --rc genhtml_function_coverage=1 00:33:31.772 --rc genhtml_legend=1 00:33:31.772 --rc geninfo_all_blocks=1 00:33:31.772 --rc geninfo_unexecuted_blocks=1 00:33:31.772 00:33:31.772 ' 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.772 --rc genhtml_branch_coverage=1 00:33:31.772 --rc genhtml_function_coverage=1 00:33:31.772 --rc genhtml_legend=1 00:33:31.772 --rc geninfo_all_blocks=1 00:33:31.772 --rc geninfo_unexecuted_blocks=1 00:33:31.772 00:33:31.772 ' 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.772 --rc genhtml_branch_coverage=1 00:33:31.772 --rc genhtml_function_coverage=1 00:33:31.772 --rc genhtml_legend=1 00:33:31.772 --rc geninfo_all_blocks=1 00:33:31.772 --rc geninfo_unexecuted_blocks=1 00:33:31.772 00:33:31.772 ' 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/common.sh 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 
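The "[: : integer expression expected" message above is a shell artifact from test/nvmf/common.sh line 33, where an unset flag is compared numerically ('[' '' -eq 1 ']'); it is harmless to the run, and the usual guard is to give the operand a numeric default before testing it. A sketch of that pattern follows, where SOME_FLAG stands in for whatever variable line 33 actually checks (the name is not visible in this log):
    # failing form, as traced above:   [ '' -eq 1 ]
    # defensive form with a default:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi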
00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:31.772 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- 
spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=543284 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 543284 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 543284 ']' 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.772 01:17:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:32.031 [2024-11-19 01:17:38.535737] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:32.031 [2024-11-19 01:17:38.535830] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid543284 ] 00:33:32.031 [2024-11-19 01:17:38.660416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:32.289 [2024-11-19 01:17:38.769961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.289 [2024-11-19 01:17:38.769980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:33:32.854 01:17:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:38.114 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:38.114 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ rdma == rdma ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:33:38.114 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@403 -- # (( 1 != 1 )) 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@405 -- # modinfo irdma 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@405 -- # modprobe irdma roce_ena=1 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:38.115 Found net devices under 0000:af:00.0: cvl_0_0 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:38.115 Found net devices under 0000:af:00.1: cvl_0_1 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:38.115 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo cvl_0_0 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo cvl_0_1 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- 
nvmf/common.sh@109 -- # continue 2 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:38.373 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address cvl_0_0 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show cvl_0_0 00:33:38.374 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:33:38.374 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:33:38.374 altname enp175s0f0np0 00:33:38.374 altname ens801f0np0 00:33:38.374 inet 192.168.100.8/24 scope global cvl_0_0 00:33:38.374 valid_lft forever preferred_lft forever 00:33:38.374 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:33:38.374 valid_lft forever preferred_lft forever 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address cvl_0_1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show cvl_0_1 00:33:38.374 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:33:38.374 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:33:38.374 altname enp175s0f1np1 00:33:38.374 altname ens801f1np1 00:33:38.374 inet 192.168.100.9/24 scope global cvl_0_1 00:33:38.374 valid_lft forever preferred_lft forever 00:33:38.374 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:33:38.374 valid_lft forever preferred_lft forever 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:38.374 
01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo cvl_0_0 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo cvl_0_1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address cvl_0_0 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=cvl_0_0 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_0 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address cvl_0_1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=cvl_0_1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show cvl_0_1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:38.374 192.168.100.9' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:38.374 192.168.100.9' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:38.374 192.168.100.9' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 
00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:38.374 01:17:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:38.374 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:38.374 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:38.374 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:38.374 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:38.374 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:38.374 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:38.374 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:38.374 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:38.374 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:38.374 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:38.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:38.374 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:38.374 ' 00:33:41.654 [2024-11-19 01:17:47.894906] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x61200002b0c0/0x617000007c40) succeed. 00:33:41.654 [2024-11-19 01:17:47.904864] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x61200002b240/0x617000007fc0) succeed. 00:33:41.654 [2024-11-19 01:17:47.904898] rdma.c:2842:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:33:41.654 [2024-11-19 01:17:47.907194] iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/1535 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:33:41.654 [2024-11-19 01:17:47.907227] iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:33:41.654 [2024-11-19 01:17:47.908601] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:33:41.654 [2024-11-19 01:17:47.910601] iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/1535 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:33:41.654 [2024-11-19 01:17:47.910627] iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:33:41.654 [2024-11-19 01:17:47.911957] transport.c: 636:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 
00:33:42.587 [2024-11-19 01:17:49.244566] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:33:45.114 [2024-11-19 01:17:51.720891] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:33:47.675 [2024-11-19 01:17:53.864011] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:33:49.048 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:49.048 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:49.048 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:49.048 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:49.048 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:49.048 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:49.048 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:49.048 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:49.048 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:49.048 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:33:49.048 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:49.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:49.048 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:49.048 01:17:55 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:49.048 01:17:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:49.048 01:17:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:49.048 01:17:55 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:49.048 01:17:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.048 01:17:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:49.048 01:17:55 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:33:49.048 01:17:55 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:49.613 01:17:56 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:49.613 01:17:56 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:49.613 01:17:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:49.613 01:17:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:49.613 01:17:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:49.613 01:17:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:49.613 01:17:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.614 01:17:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:49.614 01:17:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:49.614 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:49.614 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:49.614 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:49.614 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:33:49.614 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:33:49.614 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:49.614 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:49.614 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:49.614 '\''/bdevs/malloc delete Malloc5'\'' 
'\''Malloc5'\'' 00:33:49.614 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:49.614 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:49.614 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:49.614 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:49.614 ' 00:33:56.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:56.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:56.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:56.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:56.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:33:56.169 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:33:56.169 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:56.169 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:56.169 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:56.169 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:56.169 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:56.169 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:56.169 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:56.169 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 543284 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 543284 ']' 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 543284 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 543284 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 543284' 00:33:56.169 killing process with pid 543284 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 543284 00:33:56.169 01:18:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 543284 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma 
== tcp ']' 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:57.104 rmmod nvme_rdma 00:33:57.104 rmmod nvme_fabrics 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:57.104 00:33:57.104 real 0m25.352s 00:33:57.104 user 0m55.310s 00:33:57.104 sys 0m5.180s 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:57.104 01:18:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:57.104 ************************************ 00:33:57.104 END TEST spdkcli_nvmf_rdma 00:33:57.104 ************************************ 00:33:57.104 01:18:03 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:57.104 01:18:03 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:57.104 01:18:03 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:57.104 01:18:03 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:57.104 01:18:03 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:57.104 01:18:03 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:57.104 01:18:03 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:57.104 01:18:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:57.104 01:18:03 -- common/autotest_common.sh@10 -- # set +x 00:33:57.104 01:18:03 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:57.104 01:18:03 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:57.104 01:18:03 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:57.104 01:18:03 -- common/autotest_common.sh@10 -- # set +x 00:34:02.380 INFO: APP EXITING 00:34:02.380 INFO: killing all VMs 00:34:02.380 INFO: killing vhost app 00:34:02.380 INFO: EXIT DONE 00:34:04.927 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:04.927 Waiting for block devices as requested 00:34:04.927 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:05.186 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:05.186 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:05.186 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:05.445 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:05.445 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:05.445 0000:00:04.2 
(8086 2021): vfio-pci -> ioatdma 00:34:05.704 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:05.704 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:05.704 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:05.704 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:05.964 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:05.964 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:05.964 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:06.223 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:06.223 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:06.223 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:08.760 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:09.328 Cleaning 00:34:09.328 Removing: /var/run/dpdk/spdk0/config 00:34:09.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:09.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:09.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:09.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:09.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:09.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:09.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:09.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:09.328 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:09.328 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:09.328 Removing: /var/run/dpdk/spdk1/config 00:34:09.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:09.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:09.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:09.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:09.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:09.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:09.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:09.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:09.328 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:09.328 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:09.328 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:09.328 Removing: /var/run/dpdk/spdk2/config 00:34:09.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:09.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:09.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:09.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:09.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:09.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:09.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:09.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:09.328 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:09.328 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:09.328 Removing: /var/run/dpdk/spdk3/config 00:34:09.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:09.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:09.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:09.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:09.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:09.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:09.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:09.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:09.328 Removing: 
/var/run/dpdk/spdk3/fbarray_memzone 00:34:09.328 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:09.328 Removing: /var/run/dpdk/spdk4/config 00:34:09.328 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:09.328 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:09.328 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:09.328 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:09.328 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:09.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:09.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:09.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:09.588 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:09.588 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:09.588 Removing: /dev/shm/bdevperf_trace.pid198220 00:34:09.588 Removing: /dev/shm/bdev_svc_trace.1 00:34:09.588 Removing: /dev/shm/nvmf_trace.0 00:34:09.588 Removing: /dev/shm/spdk_tgt_trace.pid144411 00:34:09.588 Removing: /var/run/dpdk/spdk0 00:34:09.588 Removing: /var/run/dpdk/spdk1 00:34:09.588 Removing: /var/run/dpdk/spdk2 00:34:09.588 Removing: /var/run/dpdk/spdk3 00:34:09.588 Removing: /var/run/dpdk/spdk4 00:34:09.588 Removing: /var/run/dpdk/spdk_pid139940 00:34:09.588 Removing: /var/run/dpdk/spdk_pid141600 00:34:09.588 Removing: /var/run/dpdk/spdk_pid144411 00:34:09.588 Removing: /var/run/dpdk/spdk_pid145281 00:34:09.588 Removing: /var/run/dpdk/spdk_pid146666 00:34:09.588 Removing: /var/run/dpdk/spdk_pid147170 00:34:09.588 Removing: /var/run/dpdk/spdk_pid148554 00:34:09.588 Removing: /var/run/dpdk/spdk_pid148783 00:34:09.588 Removing: /var/run/dpdk/spdk_pid149399 00:34:09.588 Removing: /var/run/dpdk/spdk_pid154415 00:34:09.588 Removing: /var/run/dpdk/spdk_pid155902 00:34:09.588 Removing: /var/run/dpdk/spdk_pid156654 00:34:09.588 Removing: /var/run/dpdk/spdk_pid157395 00:34:09.588 Removing: /var/run/dpdk/spdk_pid158157 00:34:09.588 Removing: /var/run/dpdk/spdk_pid158904 00:34:09.588 Removing: /var/run/dpdk/spdk_pid159156 00:34:09.588 Removing: /var/run/dpdk/spdk_pid159409 00:34:09.588 Removing: /var/run/dpdk/spdk_pid159818 00:34:09.588 Removing: /var/run/dpdk/spdk_pid160822 00:34:09.588 Removing: /var/run/dpdk/spdk_pid164120 00:34:09.588 Removing: /var/run/dpdk/spdk_pid164783 00:34:09.588 Removing: /var/run/dpdk/spdk_pid165490 00:34:09.588 Removing: /var/run/dpdk/spdk_pid165717 00:34:09.588 Removing: /var/run/dpdk/spdk_pid167351 00:34:09.588 Removing: /var/run/dpdk/spdk_pid167578 00:34:09.588 Removing: /var/run/dpdk/spdk_pid169234 00:34:09.588 Removing: /var/run/dpdk/spdk_pid169437 00:34:09.588 Removing: /var/run/dpdk/spdk_pid170144 00:34:09.588 Removing: /var/run/dpdk/spdk_pid170352 00:34:09.588 Removing: /var/run/dpdk/spdk_pid170869 00:34:09.588 Removing: /var/run/dpdk/spdk_pid171098 00:34:09.588 Removing: /var/run/dpdk/spdk_pid172569 00:34:09.588 Removing: /var/run/dpdk/spdk_pid172823 00:34:09.588 Removing: /var/run/dpdk/spdk_pid173116 00:34:09.588 Removing: /var/run/dpdk/spdk_pid177480 00:34:09.588 Removing: /var/run/dpdk/spdk_pid182208 00:34:09.588 Removing: /var/run/dpdk/spdk_pid192395 00:34:09.588 Removing: /var/run/dpdk/spdk_pid193309 00:34:09.588 Removing: /var/run/dpdk/spdk_pid198220 00:34:09.588 Removing: /var/run/dpdk/spdk_pid198467 00:34:09.588 Removing: /var/run/dpdk/spdk_pid202910 00:34:09.588 Removing: /var/run/dpdk/spdk_pid208856 00:34:09.588 Removing: /var/run/dpdk/spdk_pid211758 00:34:09.588 Removing: /var/run/dpdk/spdk_pid222088 00:34:09.588 Removing: 
/var/run/dpdk/spdk_pid246975 00:34:09.588 Removing: /var/run/dpdk/spdk_pid251003 00:34:09.588 Removing: /var/run/dpdk/spdk_pid336137 00:34:09.588 Removing: /var/run/dpdk/spdk_pid341256 00:34:09.588 Removing: /var/run/dpdk/spdk_pid346788 00:34:09.588 Removing: /var/run/dpdk/spdk_pid356217 00:34:09.847 Removing: /var/run/dpdk/spdk_pid387810 00:34:09.847 Removing: /var/run/dpdk/spdk_pid393251 00:34:09.847 Removing: /var/run/dpdk/spdk_pid431449 00:34:09.847 Removing: /var/run/dpdk/spdk_pid433183 00:34:09.847 Removing: /var/run/dpdk/spdk_pid434936 00:34:09.847 Removing: /var/run/dpdk/spdk_pid439184 00:34:09.847 Removing: /var/run/dpdk/spdk_pid444666 00:34:09.847 Removing: /var/run/dpdk/spdk_pid451746 00:34:09.847 Removing: /var/run/dpdk/spdk_pid452848 00:34:09.847 Removing: /var/run/dpdk/spdk_pid453974 00:34:09.847 Removing: /var/run/dpdk/spdk_pid455105 00:34:09.847 Removing: /var/run/dpdk/spdk_pid455466 00:34:09.847 Removing: /var/run/dpdk/spdk_pid460192 00:34:09.847 Removing: /var/run/dpdk/spdk_pid460199 00:34:09.847 Removing: /var/run/dpdk/spdk_pid464653 00:34:09.847 Removing: /var/run/dpdk/spdk_pid465318 00:34:09.847 Removing: /var/run/dpdk/spdk_pid465782 00:34:09.847 Removing: /var/run/dpdk/spdk_pid465900 00:34:09.847 Removing: /var/run/dpdk/spdk_pid467614 00:34:09.847 Removing: /var/run/dpdk/spdk_pid469694 00:34:09.847 Removing: /var/run/dpdk/spdk_pid471773 00:34:09.847 Removing: /var/run/dpdk/spdk_pid473589 00:34:09.847 Removing: /var/run/dpdk/spdk_pid475465 00:34:09.847 Removing: /var/run/dpdk/spdk_pid477264 00:34:09.847 Removing: /var/run/dpdk/spdk_pid483172 00:34:09.847 Removing: /var/run/dpdk/spdk_pid483928 00:34:09.847 Removing: /var/run/dpdk/spdk_pid485648 00:34:09.847 Removing: /var/run/dpdk/spdk_pid486838 00:34:09.847 Removing: /var/run/dpdk/spdk_pid492544 00:34:09.847 Removing: /var/run/dpdk/spdk_pid495468 00:34:09.847 Removing: /var/run/dpdk/spdk_pid501103 00:34:09.847 Removing: /var/run/dpdk/spdk_pid511694 00:34:09.848 Removing: /var/run/dpdk/spdk_pid511711 00:34:09.848 Removing: /var/run/dpdk/spdk_pid530917 00:34:09.848 Removing: /var/run/dpdk/spdk_pid531308 00:34:09.848 Removing: /var/run/dpdk/spdk_pid537304 00:34:09.848 Removing: /var/run/dpdk/spdk_pid537797 00:34:09.848 Removing: /var/run/dpdk/spdk_pid539691 00:34:09.848 Removing: /var/run/dpdk/spdk_pid543284 00:34:09.848 Clean 00:34:09.848 01:18:16 -- common/autotest_common.sh@1453 -- # return 0 00:34:09.848 01:18:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:09.848 01:18:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:09.848 01:18:16 -- common/autotest_common.sh@10 -- # set +x 00:34:10.106 01:18:16 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:10.106 01:18:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.106 01:18:16 -- common/autotest_common.sh@10 -- # set +x 00:34:10.106 01:18:16 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/timing.txt 00:34:10.106 01:18:16 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/udev.log ]] 00:34:10.106 01:18:16 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/udev.log 00:34:10.106 01:18:16 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:10.106 01:18:16 -- spdk/autotest.sh@398 -- # hostname 00:34:10.107 01:18:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_test.info 00:34:10.107 geninfo: WARNING: invalid characters removed from testname! 00:34:32.058 01:18:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:34:32.058 01:18:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:34:33.961 01:18:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:34:35.339 01:18:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:34:37.243 01:18:43 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:34:39.146 01:18:45 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:34:41.052 01:18:47 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:41.052 01:18:47 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:41.052 01:18:47 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/timing.txt ]] 00:34:41.052 01:18:47 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:41.052 01:18:47 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:41.052 01:18:47 -- 
common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/timing.txt 00:34:41.052 + [[ -n 62348 ]] 00:34:41.052 + sudo kill 62348 00:34:41.063 [Pipeline] } 00:34:41.078 [Pipeline] // stage 00:34:41.083 [Pipeline] } 00:34:41.097 [Pipeline] // timeout 00:34:41.102 [Pipeline] } 00:34:41.116 [Pipeline] // catchError 00:34:41.121 [Pipeline] } 00:34:41.137 [Pipeline] // wrap 00:34:41.143 [Pipeline] } 00:34:41.156 [Pipeline] // catchError 00:34:41.177 [Pipeline] stage 00:34:41.179 [Pipeline] { (Epilogue) 00:34:41.192 [Pipeline] catchError 00:34:41.195 [Pipeline] { 00:34:41.208 [Pipeline] echo 00:34:41.210 Cleanup processes 00:34:41.216 [Pipeline] sh 00:34:41.504 + sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:34:41.504 559834 sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:34:41.518 [Pipeline] sh 00:34:41.878 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:34:41.878 ++ grep -v 'sudo pgrep' 00:34:41.878 ++ awk '{print $1}' 00:34:41.878 + sudo kill -9 00:34:41.878 + true 00:34:41.956 [Pipeline] sh 00:34:42.283 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:52.586 [Pipeline] sh 00:34:52.874 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:52.874 Artifacts sizes are good 00:34:52.888 [Pipeline] archiveArtifacts 00:34:52.894 Archiving artifacts 00:34:53.301 [Pipeline] sh 00:34:53.587 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-cvl-phy-autotest 00:34:53.602 [Pipeline] cleanWs 00:34:53.611 [WS-CLEANUP] Deleting project workspace... 00:34:53.611 [WS-CLEANUP] Deferred wipeout is used... 00:34:53.618 [WS-CLEANUP] done 00:34:53.619 [Pipeline] } 00:34:53.635 [Pipeline] // catchError 00:34:53.646 [Pipeline] sh 00:34:53.930 + logger -p user.info -t JENKINS-CI 00:34:53.939 [Pipeline] } 00:34:53.951 [Pipeline] // stage 00:34:53.956 [Pipeline] } 00:34:53.970 [Pipeline] // node 00:34:53.975 [Pipeline] End of Pipeline 00:34:54.011 Finished: SUCCESS